Privacy, Security & Government Surveillance – Technology Liberation Front
https://techliberation.com
Keeping politicians' hands off the Net & everything else related to technology

VIDEO: My London Talk about the Future of AI Governance
https://techliberation.com/2022/06/13/video-my-london-talk-about-the-future-of-ai-governance/
Mon, 13 Jun 2022

On Thursday, June 9, it was my great pleasure to return to my first work office at the Adam Smith Institute in London and give a talk on the future of innovation policy and the governance of artificial intelligence. James Lawson, who is affiliated with the ASI and wrote a wonderful 2020 study on AI policy, introduced me and also offered some remarks. Among the issues discussed:

  • What sort of governance vision should govern the future of innovation generally and AI in particular: the “precautionary principle” or “permissionless innovation”?
  • Which AI sectors are witnessing the most exciting forms of innovation currently?
  • What are the fundamental policy fault lines in the AI policy debates today?
  • Will fears about disruption and automation lead to a new Luddite movement?
  • How can “soft law” and decentralized governance mechanisms help us solve pressing policy concerns surrounding AI?
  • How did automation affect traditional jobs and sectors?
  • Will the European Union’s AI Act become a global model for regulation and will it have a “Brussels Effect” in terms of forcing innovators across the world to come into compliance with EU regulatory mandates?
  • How will global innovation arbitrage affect the efforts by governments in Europe and elsewhere to regulate AI innovation?
  • Can the common law help address AI risk? How is the UK common law system superior to the US legal system?
  • What do we mean by “existential risk” as it pertains to artificial intelligence?

I have a massive study in the works addressing all these issues. In the meantime, you can watch the video of my London talk here. And thanks again to my friends at the Adam Smith Institute for hosting!

New Jurimetrics Article: “Soft Law in U.S. ICT Sectors: Four Case Studies”
https://techliberation.com/2021/02/01/new-jurimetrics-article-soft-law-in-u-s-ict-sectors-four-case-studies/
Mon, 01 Feb 2021

After a slight delay, Jurimetrics has finally published my latest law review article, “Soft Law in U.S. ICT Sectors: Four Case Studies.” It is part of a major symposium that Arizona State University (ASU) Law School put together for the journal on “Governing Emerging Technologies Through Soft Law: Lessons For Artificial Intelligence.” I was one of four scholars invited to pen foundational essays for this symposium. Jurimetrics is an official publication of the American Bar Association’s Section of Science & Technology Law.

This report was a major undertaking that involved dozens of interviews, extensive historical research, several events and presentations, and then numerous revisions before the final product was released. The final PDF version of the journal article is attached.

Here is the abstract:

Traditional hard law tools and processes are struggling to keep up with the rapid pace of innovation in many emerging technologies sectors. As a result, policymakers in the United States rely increasingly on less formal “soft law” governance mechanisms to address concerns surrounding many newer technologies. This Article explores four case studies from different information technology areas where soft law mechanisms have already been utilized to address governance concerns. These four sectoral case studies include domain name management, content oversight, privacy policy, and cybersecurity matters. After considering the various soft law mechanisms used to address those issues, the Article concludes with some general thoughts about the effectiveness of those approaches and what lessons those case studies might hold for the use of soft law in other emerging technology sectors and contexts.

The Continuing Data Privacy Debates and the Question of Enforcement
https://techliberation.com/2020/05/07/the-continuing-data-privacy-debates-and-the-question-of-enforcement/
Thu, 07 May 2020

Recently, a group of Republican senators announced they plan to introduce the COVID-19 Consumer Data Protection Act of 2020 to address privacy concerns related to contact-tracing and other pandemic-related apps. This new bill will reinvigorate many of the ongoing concerns regarding a potential federal data privacy framework.

Even before the bill has been officially introduced, it has faced criticism from some groups for failing to sufficiently protect consumers. But a more regulatory approach that might appear protective on the surface also has consequences. The European Union’s (EU) General Data Protection Regulation (GDPR) has made it more complex to develop compliant contact-tracing apps and to run charitable responses that might need personal information. Ideally, data privacy policy around the specific COVID-19 concerns should have enough certainty to enable innovative responses while preserving civil liberties. Policymakers should approach this policy area in a way that enables consumers to choose which options work best for their own privacy preferences and not dictate a one-size-fits-all set of privacy standards.

A quick review of the current landscape of the data privacy policy debate

Unlike the EU, the United States has taken an approach that only creates privacy regulation for specific types of data. Specific frameworks address those areas that consumers would likely consider the most sensitive and expect increased protection, such as financial information, health information, and children’s information. In general, this approach has allowed new and innovative uses of data to flourish.

Following various scandals and data breaches and the expansive regulatory requirements of the EU’s GDPR, policymakers, advocates, consumers, and tech companies have begun to question if the United States should follow Europe’s lead, or instead create a different federal data protection framework, or even maintain the status quo. In the absence of federal action, states such as California have passed their own data privacy laws. The California Consumer Privacy Act (CCPA) became effective in January (you may remember a flurry of emails notifying you of privacy policy changes) and is set to become enforceable July 1. The lack of a federal framework means, with various state laws, the United States could go from an innovation-enabling hands-off approach to a disruptive patchwork, creating confusion for both consumers and innovators. A patchwork means that some beneficial products might not be available in all states because of differing requirements or that the most restrictive parts of a state’s law might become the de facto rule. To avoid this scenario, a federal framework would provide certainty to innovators creating beneficial uses of data such as contact-tracing apps (and the consumers that use them) while also clarifying the redress and any necessary checks to prevent harm.

Questions of Enforcement in the Data Privacy Debate

One key roadblock to achieving a federal privacy framework is the question of how such rules should be enforced. Some of the early criticism of the potential COVID-19 data privacy bill has been about the anticipated lack of additional enforcement.

Often the choices for data privacy enforcement are portrayed as a false dichotomy between the status quo and an aggressive private right of action, with neither side willing to give way. In reality, as I discuss in a new primer, there is a wide range of options for potential enforcement. Policymakers should build on the advantages of the current flexible approach that has allowed American innovation to flourish. This also provides a key opportunity to improve certainty for both innovators and consumers when it comes to new uses of data. More precautionary and regulatory approaches could increase costs and discourage innovation by burdening innovative products with the need for pre-approval. Ideally, a policy framework should preserve consumers’ and innovators’ ability to make a wide range of privacy choices while still providing redress in the case of fraudulent claims or other wrongful action.

There are tradeoffs in all approaches. Current Federal Trade Commission (FTC) enforcement has led to concerns around the use of consent decrees and the need for clarity. A new agency to govern data privacy could be a massive expansion of the administrative state. State attorneys general might interpret and enforce federal privacy law differently if not given clear guidance from the FTC or Congress. A private right of action could deter not only potentially harmful innovation but prevent consumers from receiving beneficial products out of concerns about litigation risks. I discuss each of these options and tradeoffs in more detail in the new primer mentioned earlier.

Policymakers should look to the success of the current approach and modify and increase enforcement to improve that approach, rather than pursue other options that could lead to some of the more pronounced consequences of intervention.

Conclusion

As we are seeing play out during the current crisis, all privacy regulation inevitably comes with tradeoffs. We should be cautious of policies that presume that privacy should always be the preferred value and instead look to address the areas of harm while allowing a wide range of preferences. When it comes to questions of enforcement and other areas of privacy legislation, policymakers should look to preserve the benefits of the American approach that has given rise to a great deal of innovation that could not have been predicted or dictated.

New Report: “Raising Rivals’ Costs Using the GDPR” (Just $1999!)
https://techliberation.com/2019/10/10/new-report-raising-rivals-costs-using-the-gdrp-just-1999/
Thu, 10 Oct 2019

“Rent-Seeking Consultants, Inc.,” a subsidiary of the Strategies and Tactics to Annoy Neighbors (SATAN) Group, is pleased to announce its latest product for clients looking to exploit well-intentioned regulation to serve their own ends. Our new report, “Raising Rivals’ Costs Using the GDPR: A Strategic Guide to Thwarting Competition, Expanding Market Share & Enhancing Profits with Minimal Effort,” is available for immediate download for just $1,999 (discounted to just $999 for our loyal “Dante’s Ninth Circle” club members).

Over the last three decades, our experts at Rent-Seeking Consultants have dedicated themselves to the mission of advancing narrow interests at the expense of public welfare. We have done so by creatively exploiting laws and regulations that — while often implemented with the very best of intentions in mind — we recognized could be converted into a tool to advantage the few at the expense of the many.

Our motto: Where others see good intentions, we see good opportunities!

Our “Raising Rivals’ Costs Using the GDPR” report is the latest in our line of new products, which aim to take Europe’s bold new privacy regulatory regime and convert it into a rent-seeker’s paradise. Our previous report outlined “How to Pretend Compliance Costs Will Destroy Your Big Company, While Also Letting Your Shareholders Know It is Actually an Amazing Way to Crush the Competition.”

In our new report, we discuss how to weaponize the GDPR complaint process to your advantage. In this regard, some crowd-sourced efforts already exist, such as the “Ship Your Enemies GDPR” website. The site helps you take advantage of GDPR’s legal requirements by forcing rival firms to respond to as many frivolous claims as you can send their way. “We’ll help you send them a GDPR Data Access Request designed to waste as much of their time as possible,” the site notes.

More recently, angry gamers took to Reddit to devise a plan to use GDPR to harass gaming giant Blizzard. Fans were mad that Blizzard had kowtowed to the Chinese government by suspending a professional gamer who had voiced support for Hong Kong protestors. In essence, the Reddit protestors hope to use the GDPR to generate the equivalent of a DDOS attack on a company through massive, coordinated data requests. Brilliant!

We admire the spirit of these ingenious initiatives, but we aim to more fully capture the value associated with them for our clients using concerted manipulation of whatever political levers we can help you pull. How? Weaponizing complaint processes is a tactic that Rent-Seeking Consultants, Inc. has used effectively in the past. When a small handful of censorial-minded folks wanted to get the Federal Communications Commission to beef up fines and penalties for broadcast “indecency,” we helped them stuff the ballot box at the agency with form letters and fake complaints to make regulators believe the public was clamoring for greater censorship, when in reality it was just serving a very small group of people who wanted a heckler’s veto over broadcast programming. We tied those broadcasters up in courts for years with these tactics! Meanwhile, the new media operators we also represented were able to race ahead with whatever content they wanted to post on their platforms. Victory!

This led to the creation of our Scaring Consumers Really Effectively While Earning Money (SCREWEM™) initiative, which eventually won the prestigious Lobbying Award for Manipulating Effectively (LAME) Award in the “Creating Needless Panic” category. Our latest report highlights how we can use that same SCREWEM™ system to whip up serious privacy-related troubles for your rivals using the GDPR complaint process — all while pretending that this is all in the public interest.

We hope you will consider ordering our new report, and please let us know what we can do to help our trusted clients take advantage of well-intentioned regulation to undermine the public good on an ongoing basis. Finally, with California set to impose costly new privacy mandates extraterritorially on the entire nation, you can count on us being in touch again soon about exciting new opportunities for raising rivals’ costs using the machinery of the State.

Sincerely,

I.M. Prehensile
Director of Strategic Political Exploits for S.A.T.A.N.


[This has been an act of satire, but the unintended consequences of GDPR are quite real. For some hard facts about what GDPR has meant in practice, see: Alec Stapp, “GDPR after One Year: Costs and Unintended Consequences,” and Eline Chivot and Daniel Castro, “What the Evidence Shows About the Impact of the GDPR After One Year.” More generally, see: “Tech Policy, Unintended Consequences & the Failure of Good Intentions.”]

Is Facebook Now Over-moderating Content?
https://techliberation.com/2018/09/10/is-facebook-now-over-moderating-content/
Mon, 10 Sep 2018

Reading professor Siva Vaidhyanathan’s recent op-ed in the New York Times, one could reasonably assume that Facebook is now seriously tackling the enormous problem of dangerous information. In detailing his takeaways from a recent hearing with Facebook’s COO Sheryl Sandberg and Twitter CEO Jack Dorsey, Vaidhyanathan explained,

Ms. Sandberg wants us to see this as success. A number so large must mean Facebook is doing something right. Facebook’s machines are determining patterns of origin and content among these pages and quickly quashing them.

Still, we judge exterminators not by the number of roaches they kill, but by the number that survive. If 3 percent of 2.2 billion active users are fake at any time, that’s still 66 million sources of potentially false or dangerous information.

One thing is clear about this arms race: It is an absurd battle of machine against machine. One set of machines create the fake accounts. Another deletes them. This happens millions of times every month. No group of human beings has the time to create millions, let alone billions, of accounts on Facebook by hand. People have been running computer scripts to automate the registration process. That means Facebook’s machines detect the fakes rather easily. (Facebook says that fewer than 1.5 percent of the fakes were identified by users.)

But it could be that, in its zeal to tamp down criticism from all sides, Facebook has overcorrected and is now over-moderating. The fundamental problem is that it is nearly impossible to know the true amount of disinformation on a platform. For one, there is little agreement on what kinds of content need to be policed. It is doubtful everyone would agree on what constitutes fake news, what separates it from disinformation or propaganda, and how all of that differs from hate speech. But more fundamentally, even if everyone agreed on what should be taken down, it is still not clear that algorithmic filtering methods would be able to perfectly approximate that agreement.

Detecting content that violates a hate speech code or a disinformation standard leads into a massive operationalization problem. A company like Facebook isn’t going to be perfect. It could produce a detection regime that is either underbroad or overbroad. It is, of course, only anecdotal evidence, but I have been seeing a lot of my friends on Facebook post about how their own posts have been taken down even when they were clearly non-political.
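The underbroad/overbroad tradeoff can be made concrete with a toy sketch. All posts, labels, and keyword lists below are invented, and real moderation systems rely on machine-learned classifiers rather than keyword matching; the point is only that tightening a filter trades false negatives (under-moderation) for false positives (over-moderation):

```python
# Toy illustration of the over-/under-moderation tradeoff.
# Every post, label, and keyword list here is hypothetical.

def flags_post(post, banned_keywords):
    """Return True if the post matches any banned keyword."""
    text = post.lower()
    return any(kw in text for kw in banned_keywords)

# (post text, is it actually spam?)
posts = [
    ("buy cheap meds now!!!", True),           # actual spam
    ("free speech is a core value", False),    # benign, mentions "free"
    ("click here for free money", True),       # actual spam
    ("my medication schedule changed", False), # benign, mentions "med"
]

def evaluate(banned_keywords):
    """Count over-moderated (benign but flagged) and
    under-moderated (spam but missed) posts."""
    over = sum(1 for text, bad in posts
               if flags_post(text, banned_keywords) and not bad)
    under = sum(1 for text, bad in posts
                if not flags_post(text, banned_keywords) and bad)
    return over, under

# A broad rule catches all the spam but also benign posts (overbroad).
print(evaluate(["free", "med"]))   # -> (2, 0)
# A narrow rule spares benign posts but misses spam (underbroad).
print(evaluate(["cheap meds"]))    # -> (0, 1)
```

Neither rule achieves (0, 0), and with only the flagged/missed counts visible from outside, an observer cannot tell which kind of error dominates — the knowledge problem the surrounding discussion describes.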

Over-moderation could explain why many conservatives have been worried about Twitter and Facebook engaging in soft censorship. Paula Bolyard made a convincing case in the Washington Post,

There have been plenty of credible reports over the past two years claiming anti-conservative bias at the Big Three Internet platforms, including the 2016 revelation that Facebook had routinely suppressed conservative outlets in the network’s “trending” news section. Further, when Alphabet-owned YouTube pulls down and demonetizes mainstream conservative content from sites such as PragerU, it certainly gives the impression that the company has its thumb on the scale.

Bolyard hints at one of the biggest problems in the conversation today. Users cannot peer behind the veil and are thus forced to impute intentions about how the network operates in practice. Here is how Sarah Myers West, a postdoctoral researcher at the AI Now Institute, described the process,

Many social network users develop “folk theories” about how platforms work: in the absence of authoritative explanations, they strive to make sense of content moderation processes by drawing connections between related phenomena, developing non-authoritative conceptions of why and how their content was removed

West goes on to cite a study of moderation efforts, which found that users thought Facebook was “powerful, perceptive, and ultimately unknowable.” Both Vaidhyanathan and Bolyard could be pushing similar folk theories. They are both astute in their comments and offer a lot to consider, but everyone in this discussion, including the operators at Facebook and Twitter, is hobbled by a fundamental knowledge problem.

Still, each platform has to create its own means of detecting this content, which will need to conform to the specifics of the platform. Evelyn Douek’s report on the Senate hearing, which you should absolutely go read, helps to fill out some of the details on this point,

[Twitter CEO Jack] Dorsey stated that Twitter does not focus on whether political content originates abroad in determining how to treat it. Because Twitter, unlike Facebook, has no “real name” policy, Twitter cannot prioritize authenticity. Dorsey instead described Twitter as focusing on the use of artificial intelligence and machine learning to detect “behavioural patterns” that suggest coordination between accounts or gaming the system. In a sense, this is also a proxy for a lack of authenticity, but on a systematic rather than an individual scale. Twitter’s focus, according to Dorsey, is on how people game the system in the “shared spaces [on Twitter] where anyone can interject themselves,” rather than the characteristics of profiles that users choose to follow.

Dorsey seems to set up a comparison between the two companies. Facebook’s method of detecting nefarious content deals with the profile, as an authenticated person, in relation to the content that is shared. Twitter, on the other hand, is looking for people to game the system in the “shared spaces [on Twitter] where anyone can interject themselves.” It might be a misread, but Dorsey suggests that Twitter is emphasizing the actions of users, which would lead to a more structural approach.

It goes without saying that Facebook’s social network is different from Twitter’s, leading to different approaches to moderation. Facebook creates dyadic connections: the relationships on Facebook run both ways, and becoming friends means being in a mutual relationship. Twitter, however, allows people to follow others without reciprocity. The result is distinct network structures. Pew, for example, was able to distinguish six broad structures, including polarized crowds, tight crowds, brand clusters, community clusters, broadcast networks, and support networks. Combined, these features make it difficult for both researchers and operators to understand the scope of the problem and how well solutions are working, or not working.

So what are the broad incentives pushing platforms to either over-moderate or under-moderate content? Here is what I could come up with:  

  • If content moderation is too broad, it will spark the ire of content creators who might get inadvertently caught up in a filter.
  • More content going over the network means more users and more engagement, and thus more advertising dollars, making the platform sensitive to over-moderation.
  • Content has both an extensive margin and an intensive margin. Facebook will want to expand the overall amount of content to attract people, but it will also want to keep the content on the network high quality. Low quality will drive people and advertisers to exit, so the platform might have an incentive to over-moderate.
  • Given the current political environment and the California privacy bill, it might make better long term sense to over-moderate or at least engage in the perception of over-moderation to reduce the chance of legal or regulatory pressures in the future.
  • The technical filtering solutions could have ambiguous effects on moderation. It could be that a platform simply is not that good at content moderation and has been under providing it.
  • Or, the filtering system could be providing an expansive program that has swept up too many people and too much content.
  • Given that people think these platforms are “powerful, perceptive, and ultimately unknowable,” the platforms might err on the side of under-moderation simply to reduce how often users experience content moderation at all.

Content moderation at scale is difficult. And messy. In creating a technical regime to deal with this problem, we shouldn’t expect platforms to get it perfect. While many have criticized platforms for under-moderation, they might now be over-moderating. Still, there is a massive knowledge problem in trying to understand whether the current level of moderation is optimal.

The Problem of Patchwork Privacy
https://techliberation.com/2018/08/15/the-problem-of-patchwork-privacy/
Wed, 15 Aug 2018

There are a growing number of voices raising concerns about privacy rights and data security in the wake of news of data breaches and potential influence operations. The European Union (EU) recently adopted the heavily restrictive General Data Protection Regulation (GDPR), which favors individual privacy over innovation or the right to speak. While there has been some discussion of potential federal legislation related to data privacy, none of these attempts has truly gained traction beyond existing special protections for vulnerable users (like children) or specific information (like that of healthcare and finances). Some states, notably including California, are attempting to solve this perceived problem of data privacy on their own, but they often create bigger problems, passing potentially unconstitutional and poorly drafted solutions.

All states have at least minimal data breach laws, and the quality of those laws, in both effectiveness and impact on innovation, varies. Normally states work as “laboratories of democracy,” able to test out different regulatory schemes for new technologies with less demosclerosis than the federal process. Similarly, they are better able to account for different preferences in tradeoffs, and in some cases they are better positioned to remove barriers to entry by reforming existing areas of law, like licensure or products liability, to accommodate a new technology. In areas like autonomous vehicles, telemedicine, and drone policy, states are often leading the way in embracing these new technologies. However, a new trend in some states of formally regulating the Internet through laws aimed at data privacy or net neutrality, to remedy what they perceive as failures of the federal government to act, ignores the potential damage to the permissionless federal policy that made the Internet what it is today.

California has passed the California Consumer Privacy Act (CCPA), and other states are likely to follow suit. Unfortunately, these types of statutes are likely to harm innovation in a misguided attempt to correct issues with data privacy. Moreover, these statutes could reach far beyond state borders, illustrating the potential risks of a fifty-state privacy patchwork.

These laws will also likely create problems in identifying which entities are covered by the privacy legislation. California’s recent CCPA defines those required to comply so ambiguously that a reasonable interpretation would imply the law applies so long as a single user is a California resident, whether or not that user is accessing the website from California, and no matter whether the website purposefully avails itself of the California market.

State laws also unintentionally make it more difficult for small, local companies to compete with Internet giants. Large companies like Google and Facebook can afford the cost of additional compliance, but it is harder for smaller and mid-size companies to cover such costs. As a result, even when they can comply, they are often left with less to invest in future innovation because those resources go to compliance instead. In a world of state-based privacy laws, it is inevitable that some states would impose contradictory standards, and companies picking and choosing which states to comply with might actually make matters worse rather than better. What is already playing out in Europe, where small and mid-size companies are choosing to exit the market rather than bear the cost of complying with new restrictions, could play out in states with more restrictive data requirements. And it is not just fledgling startups that have difficulty: the L.A. Times and Chicago Tribune have been unavailable to Europeans since GDPR became effective because they had not completed compliance by the May deadline. In some cases, companies have found it easier to block or exclude affected users than to comply with onerous data restrictions.

In some cases, state laws that exempt companies below a certain number of users may also discourage growth past that point. For example, the CCPA kicks in at 50,000 users. As a result, there is a large marginal cost to gaining the 50,001st user, because compliance with the standards is immediately required. This might lead some newer platforms to cap their user bases or encourage innovators to look for loopholes to avoid the high cost of compliance early on.
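A stylized calculation illustrates this cost cliff. The threshold mirrors the CCPA trigger discussed above, but the dollar figure is an invented assumption; the statute does not prescribe a compliance cost:

```python
# Stylized illustration of a user-count threshold creating a marginal
# cost cliff. The compliance cost figure is a hypothetical assumption.

THRESHOLD = 50_000                  # users at which the law kicks in
ASSUMED_COMPLIANCE_COST = 2_000_000 # invented fixed cost, in dollars

def compliance_cost(users: int) -> int:
    """Fixed compliance cost once a platform crosses the threshold."""
    return ASSUMED_COMPLIANCE_COST if users > THRESHOLD else 0

# The marginal cost of one more user is normally zero...
print(compliance_cost(50_000) - compliance_cost(49_999))  # -> 0
# ...but the 50,001st user triggers the entire fixed cost at once.
print(compliance_cost(50_001) - compliance_cost(50_000))  # -> 2000000
```

A discontinuity like this is exactly why a platform near the threshold might rationally stop growing, while one far below or far above it is unaffected.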

But even if states were able to create a sort of interstate compact that created an effectively uniform state level set of privacy laws, it would still be an inappropriate use of federalism for the state to govern data privacy due to its de facto impact on interstate commerce and the First Amendment.

The Internet by its very nature transcends state borders, and any state law aimed at privacy is likely to have national and global impact. This is not what federalism intends, and it is not just a problem for states like California that host a significant number of tech companies. If there are 50 different state laws, then new online intermediaries will have to develop 50 different compliance policies, or the most restrictive state will become the de facto standard for everyone left in the industry. As Jeff Kosseff points out, a world of 50 variations on the same privacy law, applied based on users’ locations, would likely require out-of-state content creators to make significant changes to their existing systems and would place an undue burden on content creators and users.

Additionally, there are legitimate concerns that First Amendment rights to share information may conflict with the way privacy rights are enforced under proposed laws. Requiring otherwise lawful content to be removed silences the speaker. For example, if a friend posts a picture from a party that includes you and you ask that all your data be removed, is that data yours or your friend’s? To remove the data would silence a speaker and value one individual’s right to privacy over another’s right to speak. In some cases such tradeoffs may be reasonable, as with speech that is not merely offensive but causes clear harm to the person it is about, such as revenge porn, but in many cases it is far less clear. Unfortunately, when faced with the crippling potential sanctions of such laws, many companies take a remove-first, ask-questions-later approach, as has been seen with copyright under the Digital Millennium Copyright Act (DMCA).

While there is a growing chorus calling for data privacy, there seems to be little willingness on the part of consumers or regulators to make the accompanying tradeoffs. The so-called “privacy paradox” describes how people fail to take the actions that would match their stated desire for increased data privacy, and many willingly admit they prefer the convenience they receive in exchange for their data. If action on data privacy is necessary, it should occur at the federal level to avoid the patchwork problems that would result from inconsistent state laws. Any law must be narrowly tailored to respect the First Amendment rights of both users and platforms. We must also be aware of the tradeoffs we make between innovation and privacy when we hear calls for a U.S. GDPR. At the same time, we should be concerned that under the heavy burden of GDPR-style compliance, a more regulated Internet where only those who can afford to comply survive may replace the permissionless, startup-driven American version.

While federal preemption may be needed to address a patchwork of state privacy laws, we should be cautious and seek to avoid the mistakes of GDPR-type privacy laws that place individual privacy above innovation and knowledge sharing. Simple steps, such as providing more transparent information and notification requirements, are more likely to allow individuals to make the privacy choices that best fit their needs.

A privacy patchwork of state-based “solutions” is likely to create more problems than it solves. The real solutions to our current dilemmas will come from conversations about how we balance the rewards of innovation with individual preferences for privacy.

GDPR Compliance: The Price of Privacy Protections https://techliberation.com/2018/07/09/gdpr-compliance-the-price-of-privacy-protections/ https://techliberation.com/2018/07/09/gdpr-compliance-the-price-of-privacy-protections/#respond Tue, 10 Jul 2018 00:43:36 +0000 https://techliberation.com/?p=76312

In preparation for a Federalist Society teleforum call that I participated in today about the compliance costs of the EU’s General Data Protection Regulation (GDPR), I gathered together some helpful recent articles on the topic and put together some talking points. I thought I would post them here and try to update this list in coming months as I find new material. (My thanks to Andrea O’Sullivan for a major assist on coming up with all this.)

Key Points:

  • GDPR is no free lunch; compliance is very costly
      • All regulation entails trade-offs, no matter how well-intentioned rules are
      • $7.8 billion estimated compliance cost for U.S. firms already
      • Punitive fees can range from €20 million to 4 percent of global firm revenue
      • Vagueness of language leads to considerable regulatory uncertainty — no one knows what “compliance” looks like
      • Even EU member states do not know what compliance looks like: 17 of 24 regulatory bodies polled by Reuters said they were unprepared for GDPR
  • GDPR will hurt competition & innovation; favors big players over small
      • Google, Facebook & others beefing up compliance departments. (EU official Věra Jourová: “They have the money, an army of lawyers, an army of technicians and so on.”)
      • Smaller firms exiting or dumping data that could be used to provide better, more tailored services
      • PwC survey found that 88% of companies surveyed spent more than $1 million on GDPR preparations, and 40% more than $10 million.
      • Before GDPR, half of all EU ad spend went to Google. The first day after it took effect, an astounding 95 percent went to Google.
      • In essence, with the GDPR, the EU is surrendering on the idea of competition being possible going forward
      • The law will actually benefit the same big companies that the EU has been going after on antitrust grounds. Meanwhile, the smaller innovators and innovations will suffer.

  • GDPR likely to raise costs to consumers, or diminish choice/quality
      • Consumers care about privacy, but they also care about choice, convenience, and low-cost services
      • The modern data-driven economy has given consumers access to an unparalleled cornucopia of information and services and it is remarkable how much of that content and how many of those services are offered to the public at no charge to them. That’s a real benefit.  
      • But if you take all the data out of the Data Economy, you won’t have much of an economy left
      • “Many organizations will pass these costs on to consumers either by erecting paywalls or forcing users to view more ads.”
      • Websites blacked out post-GDPR: Instapaper, Los Angeles Times, Chicago Tribune (all Tronc- and Lee Enterprises-owned media platforms), A&E Networks websites.
      • “EU-only” web experience: stripped-down websites without illustrations or images, e.g., NPR and USA Today.
      • The Washington Post is charging for a more expensive GDPR-compliant subscription.
  • GDPR hurts global flow of information; worsens problem of data localization
    • Rules only allow data to move to jurisdictions that offer an adequate level of protection
    • Cloud computing? Cloud architects are building costly new infrastructure that can isolate and inspect EU data to ensure it is not “sent” to the wrong jurisdiction.
    • Another step toward a more “bordered” Internet
    • Likely to just create more walled gardens
    • Max Schrems: “Unfortunately data localization is probably the best solution right now. It’s not really a solution that appeals to me a lot, but I think we need data localization for other reasons anyways, like load times and so on.”
    • Roundabout way to impose tariffs? Data-based firms are largely external to EU.
  • GDPR doesn’t solve bigger problem of government access to data
    • EU Data Retention Directive: third parties must keep data for law enforcement for two years (passed after terrorist attacks).
    • EU member states often have no FISA-like body overseeing government wiretap requests. France and the UK have no court apparatus governing surveillance — instead issued directly by administrative bodies. In Germany, their FBI equivalent can install a “Federal Trojan” virus directly into third party platforms without their knowledge.
  • GDPR doesn’t really move the needle much in terms of real privacy protection
    • heavy-handed, top-down regulatory regimes don’t always accomplish their goals when it comes to privacy
    • what consumers need is new competitive options and privacy innovations
    • Unfortunately, the world won’t get the new choices we need if regulations like the GDPR essentially punish them with regulatory compliance costs that only the largest current incumbents can possibly absorb

Related Research & Articles:

A Roundup of Commentary on the Supreme Court’s Carpenter v. United States Decision https://techliberation.com/2018/06/25/a-roundup-of-commentary-on-the-supreme-courts-carpenter-v-united-states-decision/ https://techliberation.com/2018/06/25/a-roundup-of-commentary-on-the-supreme-courts-carpenter-v-united-states-decision/#comments Mon, 25 Jun 2018 13:08:42 +0000 https://techliberation.com/?p=76289

On Friday, the Supreme Court ruled on Carpenter v. United States, a case involving cell-site location information. In the 5 to 4 decision, the Court declared that “The Government’s acquisition of Carpenter’s cell-site records was a Fourth Amendment search.” What follows below is a roundup of reactions and comments on the decision.

Ashkhen Kazaryan, Legal Fellow at TechFreedom, had this to say about the ruling:

This ruling recognizes the immensely sensitive nature of cell phone location data, and rightly requires a showing of probable cause before law enforcement can obtain location information from mobile carriers. Our country’s Founders would have expected no lesser safeguards to apply to non-stop surveillance. Indeed, the American Revolution was first instigated over surveillance that was far less invasive.

Ryan Radia at Competitive Enterprise Institute commended the decision:

Although the court’s opinion was narrowly crafted to address the particular facts in this case, its decision underscores the court’s willingness to apply rigorous scrutiny to governmental surveillance involving new technologies. In the United States, the Constitution protects people from unreasonable searches and seizures, and Fourth Amendment protection should apply to private information held on or collected through our personal devices.

Curt Levey, president of the Committee for Justice, penned an op-ed for Fox News:

Rapid technological change inevitably outpaces the glacial evolution of the law and the Carpenter case is a perfect example. The location data in question was obtained under the Stored Communications Act (SCA), which did not require prosecutors to meet the “probable cause” standard of a warrant.

So Timothy Carpenter turned to the Constitution. But the Justice Department argued that the Fourth Amendment didn’t apply because of the Supreme Court’s Third-Party Doctrine. That doctrine holds that no search or seizure occurs when the government obtains data that the accused has voluntarily conveyed to a third party – in this case, one’s wireless provider.

The Third-Party Doctrine made some sense when it was invented 40 years ago. However, when applied to today’s modern technology, the doctrine results in a gaping hole in the Fourth Amendment…

The good news is that the Supreme Court took a big step towards repairing that hole Friday. In an opinion by Chief Justice John Roberts, the court acknowledged that Fourth Amendment doctrines must evolve to account for “seismic shifts in digital technology.”

Orin Kerr runs through nine questions you might have on the decision over at the Volokh Conspiracy:

(9) Does This Reasoning Apply Just For Physical Location Tracking, Or Does It Apply More Broadly?

That’s the big question. On one hand, the reasoning of the opinion is largely about tracking a person’s physical location. The opinion takes as a given that you have a reasonable expectation of privacy in the “whole” of your “physical movements.” The Court has never held that, so it’s sort of an unusual thing to just assume! But the Court seems to be getting it mostly from Justice Alito’s Jones concurrence, and the idea, as Alito wrote in Jones, that “society’s expectation has been that law enforcement agents and others would not— and indeed, in the main, simply could not—secretly monitor and catalogue every single movement of an individual’s car for a very long period.” …

On the other hand, there’s lots of language in the opinion that cuts the other way. Although the Court “decides no more than the case before us,” it also recasts a lot of doctrine in ways that could be used to argue for lots of other changes. Its use of equilibrium-adjustment will open the door to lots of new arguments about other records that are also protected. For example, what is the scope of this reasonable expectation of privacy in the “whole” of physical movements? Why is it there? The Jones concurrences were really light on that, and Carpenter doesn’t do much beyond citing them for it: What is this doctrine and where did it come from? (And what other reasonable expectations of privacy in things do people have that we didn’t know about, and what will violate them?)

Cato’s Ilya Shapiro and Julian Sanchez comment on the Supreme Court’s decision in this Cato Daily podcast.

Columbia Law Professor Eben Moglen of the Software Freedom Law Center also opined on the decision:

The decision in Carpenter v. United States is a groundbreaking change in the application of the Fourth Amendment in digital society. By stating that the pervasive geographic location data assembled by cellular providers is not insulated from the warrant requirement even though it is information collected by third parties, the Court has fundamentally changed the principles underlying the application of the Amendment before today. The Court has stated that its present decision is narrow and factual, but a flood of further cases will seek to widen the meaning of today’s opinion.

How Well-Intentioned Privacy Regulation Could Boost Market Power of Facebook & Google https://techliberation.com/2018/04/25/how-well-intentioned-privacy-regulation-could-boost-market-power-of-facebook-google/ https://techliberation.com/2018/04/25/how-well-intentioned-privacy-regulation-could-boost-market-power-of-facebook-google/#respond Wed, 25 Apr 2018 14:25:08 +0000 https://techliberation.com/?p=76261


Two weeks ago, as Facebook CEO Mark Zuckerberg was getting grilled by Congress during a two-day media circus set of hearings, I wrote a counterintuitive essay about how it could end up being Facebook’s greatest moment. How could that be? As I argued in the piece, with an avalanche of new rules looming, “Facebook is potentially poised to score its greatest victory ever as it begins the transition to regulated monopoly status, solidifying its market power, and limiting threats from new rivals.”

With the exception of probably only Google, no firm other than Facebook likely has enough lawyers, lobbyists, and money to deal with the layers of red tape and corresponding regulatory compliance headaches that lie ahead. That’s true both here and especially abroad in Europe, which continues to pile on new privacy and “data protection” regulations. While such rules come wrapped in the very best of intentions, there’s just no getting around the fact that regulation has costs. In this case, the unintended consequence of well-intentioned data privacy rules is that the emerging regulatory regime will likely discourage (or potentially even destroy) the chances of getting the new types of innovation and competition that we so desperately need right now.

Others now appear to be coming around to this view. On April 23, both the New York Times and The Wall Street Journal ran feature articles with remarkably similar titles and themes. The New York Times article by Daisuke Wakabayashi and Adam Satariano was titled “How Looming Privacy Regulations May Strengthen Facebook and Google,” and The Wall Street Journal’s piece, “Google and Facebook Likely to Benefit From Europe’s Privacy Crackdown,” was penned by Sam Schechner and Nick Kostov. “In Europe and the United States, the conventional wisdom is that regulation is needed to force Silicon Valley’s digital giants to respect people’s online privacy. But new rules may instead serve to strengthen Facebook’s and Google’s hegemony and extend their lead on the internet,” note Wakabayashi and Satariano in the NYT essay. They go on to note how “past attempts at privacy regulation have done little to mitigate the power of tech firms.” This includes regulations like Europe’s “right to be forgotten” requirement, which has essentially put Google in a privileged position as the “chief arbiter of what information is kept online in Europe.” Meanwhile, the WSJ article opens with this interesting story about the epiphany EU regulator Věra Jourová had upon visiting the supposed victims of the EU’s new General Data Protection Regulation, or GDPR:
When the European Union’s justice commissioner traveled to California to meet with Google and Facebook last fall, she was expecting to get an earful from executives worried about the Continent’s sweeping new privacy law. Instead, she realized they already had the situation under control. “They were more relaxed, and I became more nervous,” said the EU official, Věra Jourová. “They have the money, an army of lawyers, an army of technicians and so on.”
Indeed they do. And that means they are better positioned to absorb the significant costs of compliance with the new GDPR rules, which are somewhat ambiguous and will require a great deal of ongoing interpretation and legal wrangling. The Journal essay also cites an unnamed Brussels lobbyist for a media-measurement firm saying, “The politicians wanted to teach Google and Facebook a lesson. And yet they favor them.” Consider this paragraph from the WSJ essay about how the two firms worked diligently to come into compliance with the new GDPR regulations:
Once the law passed in spring 2016, Google and Facebook threw people at the problem. Google involved lawyers in the U.S., Ireland, Brussels and elsewhere to pore over contracts and procedures, said people close to the company. Facebook mobilized hundreds of people in what it describes as the largest interdepartmental team it has ever assembled. Facebook lawyers spent a year scrutinizing the law’s lengthy text. Designers and engineers then toiled over how to implement changes, according to Stephen Deadman, Facebook’s global deputy chief privacy officer. During the process, Facebook got frequent access to regulators across Europe. It met with Helen Dixon, the data protection commissioner in Ireland, where the company bases its European operations, and her staff to run through changes Facebook was planning. Ms. Dixon’s agency provided the firm with feedback on the wording of its consent requests, Facebook said.
Now ask yourself how many other smaller existing or new firms would be in a position to do the same thing. Answer: Not many. We’re already seeing the deleterious effects of the GDPR on market structure, the Journal reports. “Some advertisers are planning to shift money away from smaller providers and toward Google and Facebook,” Schechner and Kostov note. And they end their essay with the telling thoughts of Bill Simmons, co-founder and chief technology officer of Dataxu, a Boston-based company that helps buy targeted ads, who says, “It is paradoxical. The GDPR is actually consolidating the control of consumer data onto these tech giants.”

The NYT essay included a funny tidbit about how “Some privacy advocates also bristle at the idea that these new restrictions would help already powerful internet companies, noting that is a well-worn argument employed by tech giants to try to prevent future regulation.” That’s a highly unfortunate attitude. If privacy advocates really care about improving the situation on the ground, then the best way to do that is with more and better choices. Sadly, it seems that with each passing day they write off the idea of any new competition emerging to challenge today’s tech giants.

“Can Facebook be replaced?” asks Olivia Solon, writing in The Guardian today. Some probably think not, but as Solon notes, prominent Silicon Valley investor Jason Calacanis, who was an early investor in several high-profile tech companies including Uber, certainly hopes so. He has launched a competition to find a “social network that is actually good for society,” and his Openbook Challenge will offer seven “purpose-driven teams” $100,000 in investment to build a billion-user social network that could replace the technology titan while protecting consumer privacy. In a blog post announcing the Challenge, Calacanis wrote: “All community and social products on the internet have had their era, from AOL to MySpace, and typically they’re not shut down by the government — they’re slowly replaced by better products. So, let’s start the process of replacing Facebook.”

I don’t have any idea whether this Openbook Challenge will succeed. It’s hard building big, scalable digital platforms that satisfy the diverse needs of a diverse world. But this is exactly the sort of innovation we should be encouraging. Even the very threat of new competition will keep the big dogs on their toes. Alas, all the new regulations being considered will likely just leave us with fewer choices, and rules that probably won’t even do all that much to truly better protect our data or privacy. But hey, at least it was all well-intentioned!

Updates:

The Week Facebook Became a Regulated Monopoly (and Achieved Its Greatest Victory in the Process) https://techliberation.com/2018/04/10/the-week-facebook-became-a-regulated-monopoly-and-achieved-its-greatest-victory-in-the-process/ https://techliberation.com/2018/04/10/the-week-facebook-became-a-regulated-monopoly-and-achieved-its-greatest-victory-in-the-process/#comments Tue, 10 Apr 2018 20:30:45 +0000 https://techliberation.com/?p=76253

With Facebook CEO Mark Zuckerberg in town this week for a political flogging, you might think that this is the darkest hour for the social networking giant. Facebook stands at a regulatory crossroads, to be sure. But allow me to offer a cynical take, and one based on history: Facebook is potentially poised to score its greatest victory ever as it begins the transition to regulated monopoly status, solidifying its market power, and limiting threats from new rivals.

By slowly capitulating to critics (both here and abroad) who are thirsty for massive regulation of the data-driven economy, Facebook is setting itself up as a servant of the state. In the name of satisfying some amorphous political “public interest” standard and fulfilling a variety of corporate responsibility objectives, Facebook will gradually allow itself to be converted into a sort of digital public utility or electronic essential facility.

That sounds like trouble for the firm until you realize that Facebook is one of the few companies who will be able to sacrifice a pound of flesh like that and remain alive. As layers of new regulatory obligations are applied, barriers to new innovations will become formidable obstacles to the very competitors that the public so desperately needs right now to offer us better alternatives. Gradually, Facebook will recognize this and go along with the regulatory schemes. And then eventually they will become the biggest defender of all of it.

Welcome to Facebook’s broadcast industry moment. The firm is essentially in the same position the broadcast sector was about a century ago when it started cozying up to federal lawmakers. Over time, broadcasters would warmly embrace an expansive licensing regime that would allow all parties—regulatory advocates, academics, lawmakers, bureaucrats, and even the broadcasters themselves—to play out the fairy tale that broadcasters would be good “public stewards” of the “public airwaves” to serve the “public interest.”

Alas, the actual listening and viewing public got royally shafted in this deal. Broadcasters got billions of dollars’ worth of completely free beachfront spectrum along with protected geographic monopolies. Congressional lawmakers and the unelected bureaucrats at the FCC got power to tinker with broadcast content and received other special favors (like free airtime) from their cronies in the industry. People, money, and influence floated freely between the political and business realms until at some point there really wasn’t much distinction between them. Meanwhile, the public got stuck with bland fare and limited competition for their ears and eyes. The “public interest” ended up meaning many things during this time, but it rarely had much to do with what the public actually desired—namely, more and better options for a diverse citizenry.

Of course, much the same story played out in the U.S. telecommunications market a few decades prior to the broadcast industry making their deal with the devil. The early history of telecommunications in America was characterized by competition among a variety of local and regional rivals. But it was derailed by political shenanigans. Here are a few choice paragraphs about the cronyist origins of the Bell System monopoly from a law review article that Brent Skorup and I wrote back in 2013 [footnotes omitted]. As you read it, imagine how similar well-intentioned regulations might play out for Facebook:

… this intensely competitive, pro-consumer free-for-all would be derailed by AT&T’s brilliant strategy to use the government to accomplish what it could not in the free market: eliminate its rivals. In 1907, Theodore Newton Vail became AT&T’s president. He had a clear vision: achieving “universal service” (in the form of interconnected and fully integrated systems) by eliminating rivals and consolidating networks. Befriending lawmakers and regulators was a crucial component of this strategy. While many policymakers nominally supported the idea of competition, they were more preoccupied with achieving widespread, interconnected network coverage. Vail capitalized on that impulse. On December 19, 1913, the government and AT&T reached the “Kingsbury Commitment.” Named after AT&T vice president Nathan C. Kingsbury, who helped negotiate the terms, the agreement outlined a plan whereby AT&T agreed not to acquire any other independent companies while also allowing other competitors to interconnect with the Bell System. The Kingsbury Commitment was thought to be pro-competitive, yet it was hardly an altruistic agreement on AT&T’s part. Regulators did not interpret the agreement so as to restrict AT&T from acquiring any new telephone systems, but only to require that an equal number be sold to an independent buyer for each system AT&T purchased. Hence, the Kingsbury Commitment contained a built-in incentive for network swapping (trading systems and solidifying territorial monopolies) rather than continued competition.  “The government solution, in short, was not the steamy, unsettling cohabitation that marks competition but rather a sort of competitive apartheid, characterized by segregation and quarantine,” observe telecom legal experts Michael Kellogg, John Thorne, and Peter Huber.  Thus, the move toward interconnection, while appearing to assist independent operators, actually allowed AT&T to gain greater control over the industry. 
“Vail chose at this time to put AT&T squarely behind government regulation, as the quid pro quo for avoiding competition,” explains [Richard] Vietor.  “This was the only politically acceptable way for AT&T to monopolize telephony,” he notes.  AT&T’s 1917 annual report confirms this fact, stating, “[with a] combination of like activities under proper control and regulation, the service to the public would be better, more progressive, efficient, and economical than competitive systems.”

So much for “the public interest”! If the last century’s worth of communications and media regulation teaches us anything, it’s that good intentions only get you so far in this world. Many of the lawmakers and regulators who allowed themselves to be duped by big corporations asking for protection from competition probably thought they were doing the right thing. Those policymakers may even have believed that they were actually encouraging innovation and competition through some of their regulatory actions. Alas, things did not turn out that way. We the public were denied real, meaningful choices and innovations because of these misguided policies.

And so now it’s Facebook’s turn to become part of this sordid tale. Zuckerberg has already made it clear that he is open to regulation and that his firm would also start enforcing new European data rules globally. And after this week’s political circus in Congress, the floodgates will be wide open and everyone’s regulatory pet peeve will be up for political consideration, which is exactly what happened for broadcasters and communications in past decades.

Every crackpot idea under the sun will be on the table but the most extreme versions of those proposals will be beaten back just enough to ensure that Facebook can offer up its pound of sacrificial flesh each time without running the risk of killing the patient entirely. Again, this was always part of the broadcast and communications regulatory playbook as well. So long as they were guaranteed a fairly stable market return and protection from pesky new innovators, the firms were willing to go along with the deal.

The “deal” in this case between Facebook and regulators won’t be so explicitly cronyist as it was for broadcasters and communications companies, however. The days of price controls, rate-of-return regulation, and formal line-of-business restrictions are likely over. Everyone now recognizes that regulations creating formal barriers to innovation and entry are a bad idea and, as a result, they are usually rejected.

But laws and regulations can sometimes create informal or hidden barriers to innovation and entry, even when they are well-intentioned. And that’s what could happen here as this latest Facebook fiasco leads to calls for seemingly innocuous things like transparency and disclosure requirements, restrictions on “bad speech,” advertising and data collection regulations, “fiduciary” responsibilities, “algorithmic accountability” efforts, and so on. Facebook hasn’t wanted to adopt some of these things in the past, but now it will be pushed aggressively to do so by policymakers and regulatory activists. As Zuckerberg and Facebook cozy up with those policymakers and activists and begin talking about a “broader view of responsibility,” the transition to the firm’s next phase as a quasi-public utility will get underway.

The rich irony of all this is that the same regulatory advocates who are cheering on this week’s developments, as well as the coming regulatory avalanche, will be the ones howling the loudest if and when only Facebook is left standing in the social media universe. In fact, that’s already happened in Europe, where policymakers and their burdensome top-down data protection regulations have driven most digital innovators and investors to other continents, leaving only Facebook, Google, and a handful of other (mostly U.S.-based) companies left to regulate. And then European policymakers have the audacity to cry foul about the market power of these firms! It boggles the mind how European policymakers and regulatory advocates see zero connection between their heavy-handed approach to the Digital Economy and the corresponding lack of competitors in those sectors.

But none of that will make any difference to the regulatory advocates. They want that pound of flesh, and they are going to get it. And then in Facebook they will have a regulatory plaything to toy with for years to come.

What about the public? Will we really be any better off because of any of this? How many people will want to stick with Facebook if it becomes a digital public utility or a social media version of the Post Office? That sure doesn’t sound like much fun for us. But if the new regulations imposed on Facebook do end up hurting smaller rivals more and create barriers to new entry and innovation going forward, then it’s unclear whether it makes any difference what we want because the options just won’t be there for us.

With time, Facebook will not only become more comfortable with its new regulatory status for that reason but then in the name of ensuring a “level playing field,” the firm will simultaneously advocate that each and every new rule be applied to all its rivals. Again, this is how well-intentioned regulation ends up indirectly discouraging the very innovation and competitive options that we need. Broadcasters and communications companies played the “level playing field” card at every juncture to beat down new technologies and rivals.

Finally, at some point, don’t be surprised if all roads lead back to prices for digital services. Right now, social networking services like Facebook are free of charge to consumers, and digital companies use advertising to support their services. Many regulatory advocates have suggested that this sort of business model is fundamentally incompatible with privacy and have wanted it strictly curtailed, if not ended altogether. Of course, if you ask the public how many of them would be willing to pay $19.95 a month for Facebook, you won’t get many takers.

I wrote a couple of law review articles about the “privacy paradox” and consumer “willingness to pay” for privacy more generally. All the evidence suggests that consumer willingness to pay for privacy is significantly lower than privacy advocates would prefer. But if, in the name of protecting privacy, prices get pushed up or imposed as a matter of public policy, then we will have entered a truly surreal moment in the history of regulatory policy, because we will have inverted the presumption that consumer welfare is better served by lower prices. Over the past century, the purpose of most public utility regulation was lower prices, higher quality, and more choice. The modern Digital Economy has largely achieved those goals without heavy-handed regulation. But now, with the regulatory regime looming for Facebook and social media more generally, we might end up with a sort of bizarro policy world in which we make people pay more in the name of making them better off!

I hope I’m wrong about everything I’ve said here. It would be troubling if we enter an era of less competition, less innovation, and lower quality information services. But to borrow a quote from my favorite sci-fi show, “all of this has happened before, and all of this will happen again.” And regulatory history tends to repeat. We shouldn’t be surprised, therefore, when some forget the ugly history of public utility-style regulation or broadcast era “public interest” mandates and we find ourselves stuck right back in the hole that we’ve been trying to dig ourselves out of for so many decades.

Bipartisan Digital Security Commission Is Only Way to Avoid PATRIOT-Style Legislative Panic https://techliberation.com/2016/02/29/bipartisan-digital-security-commission-is-only-way-to-avoid-patriot-style-legislative-panic/ https://techliberation.com/2016/02/29/bipartisan-digital-security-commission-is-only-way-to-avoid-patriot-style-legislative-panic/#respond Mon, 29 Feb 2016 20:13:43 +0000 https://techliberation.com/?p=75999

This article originally appeared at techfreedom.org.

Today, Rep. Michael McCaul (R-TX) and Sen. Mark Warner (D-VA) introduced legislation to create a blue ribbon commission that would examine the challenges encryption and other forms of digital security pose to law enforcement and national security. The sixteen-member commission will be made up of experts from law enforcement, the tech industry, privacy advocacy groups, and other important stakeholders in the debate, and will be required to present an initial report after six months and final recommendations within a year.

In today’s Tech Policy Podcast, TechFreedom President Berin Szoka and Ryan Hagemann, the Niskanen Center’s technology and civil liberties policy analyst, discussed the commission’s potential.

“I see this commission as an ideal resting place for this debate,” Hagemann said. “Certainly what we’re trying to avoid is pushing through any sort of knee-jerk legislation that Senators Feinstein or Burr would propose, especially in the wake of a new terrorist attack.”

“I share the chairman’s concerns that since we’re not making any headway on these issues in the public forum, what is really needed here is for Congress to take some level of decisive action and get all of the people who have something to gain as well as something to lose in this debate to just sit down and talk through the issues that all parties have,” he continued.

“I think it’s going to come out and say that there is no middle ground on end-to-end encryption, but it’s probably going to deal with the Apple situation very specifically,” Szoka said. “I think you’re going to see some standard that is going to be probably a little more demanding upon law enforcement than what law enforcement wants under the All Writs Act.”

Global Leaders Must Support Strong Encryption https://techliberation.com/2016/01/13/global-leaders-must-support-strong-encryption/ https://techliberation.com/2016/01/13/global-leaders-must-support-strong-encryption/#comments Wed, 13 Jan 2016 19:38:49 +0000 http://techliberation.com/?p=75975

This article was originally posted on techfreedom.org

On January 11, TechFreedom joined nearly 200 organizations, companies, and experts from more than 40 countries in urging world leaders to support strong encryption and to reject any law, policy, or mandate that would undermine digital security. In France, India, the U.K., China, the U.S., and beyond, governments are considering legislation and other proposals that would undermine strong encryption. The letter is now open to public support and is hosted at https://www.SecureTheInternet.org.

The letter concludes:

Strong encryption and the secure tools and systems that rely on it are critical to improving cybersecurity, fostering the digital economy, and protecting users. Our continued ability to leverage the internet for global growth and prosperity and as a tool for organizers and activists requires the ability and the right to communicate privately and securely through trustworthy networks.

“There’s no middle ground on encryption,” said Tom Struble, Policy Counsel at TechFreedom. “You either have encryption or you don’t. Any vulnerability imposed for government use can be exploited by those who seek to do harm. Privacy in communications means governments must not ban or restrict access to encryption, or mandate or otherwise pressure companies to implement backdoors or other security vulnerabilities into their products.”

FTC’s Big Data Report Offers Little Analysis, Zero Economics https://techliberation.com/2016/01/07/ftcs-big-data-report-offers-little-analysis-zero-economics/ https://techliberation.com/2016/01/07/ftcs-big-data-report-offers-little-analysis-zero-economics/#comments Thu, 07 Jan 2016 20:43:23 +0000 http://techliberation.com/?p=75972

This article originally appeared at techfreedom.org

Yesterday, the FTC reiterated its age-old formula: there are benefits, there are risks, and here are some recommendations on what we regard as best practices. The report summarizes the workshop the agency held in October 2014, “Big Data: A Tool for Inclusion or Exclusion?”

Commissioner Ohlhausen issued a separate statement, saying the report gave “undue credence to hypothetical harms” and failed to “consider the powerful forces of economics and free-market competition,” which might avoid some of the hypothetical harms in the report.

“The FTC is essentially saying, ‘there are clear benefits to Big Data and there may also be risks, but we have no idea how large they are,’” said Berin Szoka. “That’s not surprising, given that not a single economist participated in the FTC’s Big Data workshop. The report repeats a litany of ‘mights,’ ‘concerns’ and ‘worries’ but few concrete examples of harm from Big Data analysis — and no actual analysis. Thus, it does little to advance understanding of how to address real Big Data harms without inadvertently chilling forms of ‘discrimination’ that actually help underserved and minority populations.”

“Most notably,” continued Szoka, “the report makes much of a single news piece suggesting that Staples charged higher prices online to customers who lived farther away from a Staples store — which was cherry-picked precisely because it’s so hard to find examples where price discrimination results in higher prices for poor consumers. The report does not mention the obvious response: if consumers are shopping online anyway, comparison shopping is easy. So why would we think this would be an effective strategy for profit-maximizing firms?”

“The FTC can do a lot better than this,” concluded Szoka. “The agency has an entire Bureau of Economics, which the Bureau of Consumer Protection stubbornly refuses to involve in its work — presumably out of the misguided notion that economic analysis is somehow anti-consumer. That’s dead wrong. As with previous FTC reports since 2009, this one’s ‘recommendations’ will have essentially regulatory effect. Moreover, the report announces that the FTC will bring Section 5 enforcement actions against Big Data companies that have ‘reason to know’ that their customers will use their analysis tools ‘for discriminatory purposes.’ That sounds uncontroversial, but all Big Data involves ‘discrimination’; the real issue is harmful discrimination, and that’s not going to be easy for Big Data platforms to assess. This kind of vague intermediary liability will likely deter Big Data innovations that could actually help consumers — like more flexible credit scoring.”

Wyndham Settlement Reinforces Need for Congressional Overhaul of FTC https://techliberation.com/2015/12/10/wyndham-settlement-reinforces-need-for-congressional-overhaul-of-ftc/ https://techliberation.com/2015/12/10/wyndham-settlement-reinforces-need-for-congressional-overhaul-of-ftc/#comments Thu, 10 Dec 2015 18:40:37 +0000 http://techliberation.com/?p=75964

This article originally appeared at techfreedom.org

WASHINGTON D.C. — Yesterday, the Federal Trade Commission announced that it had reached a settlement with Wyndham Hotels over charges that the company had “unreasonable” data security. In 2009, Russian hackers stole customer information, including credit card numbers, from Wyndham hotel systems. The company initially refused to settle an FTC enforcement action, becoming the first to challenge the FTC’s approach to data security in federal court. The FTC has used a decade of settlements with dozens of companies to establish fuzzy de facto standards for data security. In August, the Third Circuit denied Wyndham’s appeal of the district court’s decision to let the case proceed.

“The FTC has, once again, avoided having a federal court definitively answer fundamental questions about the constitutionality of the FTC’s approach to data security,” said Berin Szoka, President of TechFreedom, which joined an amicus brief in the case. “The FTC will no doubt claim the Third Circuit vindicated its approach, but all the court really said was that Wyndham’s specific practices may have been unfair. Indeed, the appeals court agreed with Wyndham that the FTC’s so-called ‘common law of consent decrees’ cannot provide the ‘fair notice’ required by the Constitution’s Due Process clause. This implied that the FTC needs to do much more to guide companies on what ‘reasonable’ data security would be. By settling the case, the FTC avoided having the district court resolve those questions.”

“It’ll take years for another case to work its way through the courts,” explained Szoka. “LabMD’s recent victory before the FTC’s chief administrative law judge is encouraging, and may allow a federal court to weigh in on the requirements of Section 5’s amorphous unfairness standard, if the full Commission overrules the ALJ. But that case focuses more on how the FTC weighs costs and benefits in each enforcement action than on how much guidance it provides to industry.”

“It’s high time Congress reasserted itself here,” concluded Szoka. “The FTC has demonstrated little willingness to change from within, and we can’t wait for the courts to address these questions. Congress needs to put the FTC on sounder footing across the board — from data security to privacy and other consumer protection issues. Far from hamstringing the agency, requiring better explanation of what the law requires and weighing of costs and benefits would actually help consumers — both by promoting better business practices and by avoiding FTC actions that end up harming consumers. Such common sense reforms should be bipartisan, just as they were back in 1980, the last time Congress really checked the FTC’s vast discretion.”

Szoka is co-author, along with Geoffrey Manne and Gus Hurwitz, of the FTC: Technology & Reform Project’s initial report, “Consumer Protection & Competition Regulation in a High-Tech World: Discussing the Future of the Federal Trade Commission,” which critiques the FTC’s processes and suggests areas where the FTC, the courts and Congress could improve how the FTC applies its sweeping unfairness and deception powers in data security, privacy and other cases, especially related to technology.

Bipartisan Bill Can Help Avoid EU Internet Blockade https://techliberation.com/2015/10/21/bipartisan-bill-can-help-avoid-eu-internet-blockade/ https://techliberation.com/2015/10/21/bipartisan-bill-can-help-avoid-eu-internet-blockade/#respond Wed, 21 Oct 2015 14:04:51 +0000 http://techliberation.com/?p=75924

This article originally appeared at techfreedom.org

Today, the House voted to extend key, but narrow, privacy rights to citizens of “covered countries.” The Judicial Redress Act, passed by a voice vote, would allow the Attorney General to work with other federal agencies to determine countries whose citizens can enforce their data protection rights in U.S. courts under the Privacy Act of 1974. Since that statute specifically exempts sensitive issues regarding law enforcement and national security, extending Privacy Act rights to citizens of selected countries poses no significant concerns.

“Today, the House took one small step toward repairing America’s tarnished image on data privacy,” said Berin Szoka, President of TechFreedom. “Since the Snowden disclosures, our government’s inaction on surveillance reform has provoked an international crisis — one that could lead to a European blockade of American Internet companies.”

Two weeks ago, in the Schrems case, the European Court of Justice struck down the Safe Harbor agreement that has, since 2000, allowed U.S. companies to receive and use data about European citizens. Lack of redress rights for Europeans is among the chief reasons why the ECJ found that the Commission had failed to update its finding that U.S. privacy protections were “adequate.”

Without a new agreement, U.S. companies will be at the mercy of each and every European Data Protection Authority, which, under Schrems, can now decide how to regulate cross-border data flows. This burden will likely fall heaviest on U.S. tech startups, who can ill afford the risk. If the Data Protection Authorities (DPAs) start cracking down, American companies may simply decide to forego the European market, or to split their services into two pieces that don’t allow users to interact — especially new companies that haven’t yet launched their services. That, in turn, could mean a regionalization of what has, until now, been an inherently global medium.

“Passage of the Judicial Redress Act is ‘table stakes’ for the U.S.,” continued Szoka. “Without it, the State Department will have no credibility at the bargaining table in negotiating with the Europeans over a replacement for Safe Harbor. However, Privacy Act rights are necessary but not sufficient: Congress will need to move on to other privacy reforms immediately, starting with ensuring that law enforcement must obtain a warrant before accessing stored data of both American and European citizens. Congress will also need to finish the surveillance reforms it started with USA FREEDOM, specifically regarding Section 702.”

We can be reached for comment at media@techfreedom.org. See more of our work on privacy, especially:

  • “Only Congressional Privacy Reforms Can Prevent EU Internet Blockade of US,” a statement from TechFreedom on the ECJ striking down Safe Harbor
White House Support for Strong Encryption Could Discourage Digital Protectionism https://techliberation.com/2015/10/09/white-house-support-for-strong-encryption-could-discourage-digital-protectionism/ https://techliberation.com/2015/10/09/white-house-support-for-strong-encryption-could-discourage-digital-protectionism/#comments Fri, 09 Oct 2015 14:57:38 +0000 http://techliberation.com/?p=75859

This Wednesday, TechFreedom joined Niskanen Center and a coalition of free market groups in urging the White House to endorse the use of strong encryption and disavow efforts to intentionally weaken encryption, whether by installing “back doors,” “front doors,” or any security vulnerabilities into encryption products.

The coalition letter concludes:

We urge your Administration to consider the full ramifications of weakening or limiting encryption. There is no such thing as a backdoor that only the US government can access: any attempt to weaken encryption means making users more vulnerable to malicious hackers, identity thieves, and repressive governments. America must stand for the right to encryption — it is nothing less than the Second Amendment for the Internet.

“The White House’s silence on encryption is deafening,” said Tom Struble, Policy Counsel at TechFreedom. “The President’s failure thus far to endorse strong encryption has given ammunition to European regulators seeking to restrict cross-border data flows and require that data on EU citizens be stored in their own countries. Just yesterday, the European Court of Justice struck down a longstanding agreement that made it easier for Europeans to access American Internet services. If the White House continues to dawdle, it will only further embolden ‘digital protectionism’ across the pond.”

The letter’s signatories include: Niskanen Center, TechFreedom, FreedomWorks, R Street Institute, Students For Liberty, Citizen Outreach, Downsize DC, Institute for Policy Innovation, Less Government, Center for Financial Privacy and Human Rights, and American Commitment.

What We’ve Been Up To: An Update on TechFreedom https://techliberation.com/2015/10/06/what-weve-been-up-to-an-update-on-techfreedom/ https://techliberation.com/2015/10/06/what-weve-been-up-to-an-update-on-techfreedom/#comments Tue, 06 Oct 2015 15:34:03 +0000 http://techliberation.com/?p=75849

The last several months have been a busy time for tech policy. Major policies have been enacted, particularly in the areas of surveillance and Internet regulation. While we haven’t checked in here on TLF in some time, TechFreedom has been consistently fighting for the policies that make innovation possible.

  1. Internet Independence: On July 4th, we launched the Declaration of Internet Independence, a grassroots petition campaign calling on Congress to restore the light-touch approach to Internet regulation that resulted in twenty years of growth and prosperity.
  2. Internet Regulation: This February the FCC issued its Open Internet Order, reclassifying broadband as a communications service under Title II of the 1934 Communications Act, despite opposition from many in the tech sector, including supporters of our “Don’t Break the Net” campaign. In response, we’ve joined CARI.net and several leading internet entrepreneurs in litigation against the FCC to ask the Court to strike down the Order.
  3. Surveillance: Section 215 of the PATRIOT Act, which authorized bulk collection of phone records, sunset this May, giving privacy advocates the opportunity to enact meaningful surveillance reform. TechFreedom voiced support for such reforms, including the USA FREEDOM Act, which will end all bulk collection of Americans’ telephone records under any authority.
  4. Broadband Deployment: Making fast, affordable Internet available to everyone is a goal that we all share. We’ve been urging government at all levels to make it easier for private companies to do just that through policies like Dig Once conduits, while cautioning that government-run broadband should only be a last resort.
  5. FTC Reform: The FTC is in dire need of reform. We’ve recommended changes to ensure that the agency fulfills its duty to protect consumers from real harm without a regulatory blank check, which stifles innovation and competition. While progress has been made, there’s still a long way to go. The agency can start by helping to unshackle the sharing economy from legacy regulations.
Unintended Consequences of the EU Safe Harbor Ruling https://techliberation.com/2015/10/06/unintended-consequenses-of-the-eu-safe-harbor-ruling/ https://techliberation.com/2015/10/06/unintended-consequenses-of-the-eu-safe-harbor-ruling/#comments Tue, 06 Oct 2015 15:12:58 +0000 http://techliberation.com/?p=75831

The big news out of Europe today is that the European Court of Justice (ECJ) has invalidated the 15-year-old EU-US safe harbor agreement, which facilitated data transfers between the EU and US. American tech companies have relied on the safe harbor to do business in the European Union, which has more onerous data handling regulations than the US. [PDF summary of decision here.] Below I offer some quick thoughts about the decision and some of its potential unintended consequences.

#1) Another blow to new entry / competition in the EU: While some pundits are claiming this is a huge blow to big US tech firms, the irony of the ruling is that it will actually bolster the market power of the biggest US tech firms, because they are the only ones that will be able to afford the formidable compliance costs associated with the resulting regulatory regime. In fact, with each EU privacy decision, Google, Facebook, and other big US tech firms just get more dominant. Small firms simply can’t comply with the EU’s expanding regulatory thicket. “It will involve lots of contracts between lots of parties and it’s going to be a bit of a nightmare administratively,” said Nicola Fulford, head of data protection at the UK law firm Kemp Little, commenting on the ruling to the BBC. “It’s not that we’re going to be negotiating them individually, as the legal terms are mostly fixed, but it does mean a lot more paperwork and they have legal implications.” And by driving up regulatory compliance costs and causing constant delays in how online business is conducted, the ruling will (again, on top of all the others) greatly limit entry and innovation by new, smaller players in the digital world. In essence, EU data regulations have already wiped out much of the digital competition in Europe, and this ruling now finishes off any global new entrants who might have hoped to break in and offer competitive alternatives. These are the sorts of stories never told in antitrust circles: costly government rulings often solidify and extend the market dominance of existing companies. Dynamic effects matter. That is certainly going to be the case here.

#2) Cross-border digital trade suffers: This conclusion follows from point #1, of course. Writing just before the decision was announced, lawyers at Norton Rose Fulbright’s Data Compliance Report blog noted that if the safe harbor were invalidated, “the impact on the world economy would be immense.” Well, here we are. Dan Castro of ITIF hopes that EU and US officials can pull back from the brink of this impending disaster and “finish the process of creating a Safe Harbor 2.0 with terms that give comfort to all parties.” I suspect that many tech companies are hoping for the same miracle to occur. But don’t hold your breath. The Europeans have decided that this is the hill they will die on. Owing to heightened concerns about privacy, they haven’t shown much interest in preserving an innovative tech market or enhancing global digital trade flows in the past, and there’s no reason to think they will back down now with a more measured approach. Importantly, as I noted in my earlier essay, “How Attitudes about Risk & Failure Affect Innovation on Either Side of the Atlantic,” this trans-Atlantic clash of visions transcends the debate over privacy law. It’s about broader cultural and political attitudes toward risk-taking and disruption. Most leaders in Europe value stability—both economic and cultural—more than US officials and citizens do. This tension was always bound to reach a breaking point, and the Digital Economy and data handling policies are where the you-know-what is finally hitting the fan.

#3) Web Balkanization accelerates: This is just another blow to the idea of a seamless global Internet. But as tech lawyer Tiffany C. Li pointed out on Twitter this morning in response to the decision, while Web pundits decry balkanization in other contexts, many of them seem to be cheering it on in this case because the decision deals with privacy and data regulation, areas where they favor more regulation. But you can’t have your cake and eat it too. Indeed, the great irony of so many “Internet freedom” debates today is that pundits absolutely hate the idea of Internet control and Web balkanization… right up until the point where they absolutely love it! Think of this as the tech policy world’s selective morality problem. (I elaborated on these themes in my essays “When It Comes to Information Control, Everybody Has a Pet Issue & Everyone Will Be Disappointed,” and “Copyright, Privacy, Property Rights & Information Control: Common Themes, Common Challenges.”)


#4) But the big dogs won’t bolt out of Europe: This should also be another reminder that there are no “John Galt moments” in the world of tech, as some tech libertarians hope. The biggest players won’t pack their bags and head home, because there’s still too much money sitting on the table in Europe. Big firms will instead scramble to comply, just as they are trying to do with the so-called Right to Be Forgotten ruling. Of course, this just exacerbates problem #1 discussed above: the big dogs stay and do their best to comply with the costly regulatory regime, while smaller players get crushed by the rules and other potential new entrants just stay home.

#5) The decision ignores the real problem of widespread government surveillance: I don’t often find myself agreeing with Cory Doctorow on much, but he gets it exactly right when he notes that “this doesn’t mean that Europeans won’t be subjected to mass surveillance, including mass surveillance by the NSA.” He elaborates:

If the European Court of Justice wants to end mass surveillance of Europeans, it can only do so by banning mass surveillance — by ruling that laws that treat foreigners’ data as fair game are unconstitutional. If US tech giants want to get loose from a farcical, expensive, and pointless exercise that continues to treat them as adjuncts to the world’s spy agencies, they need to lobby the US government to change the laws under which it treats foreigners as fair game.

Thus, it would certainly be nice if, as CDT suggested in response to the ruling, the “EU Safe Harbour Ruling Should Reinvigorate Surveillance Reform Efforts.” Of course, that requires that tech companies muster the courage to stand up to public officials here in the States who always want them to (literally) hand over the keys to the kingdom. That’s why the current debate over crypto backdoors is so essential. It’s good to see a number of tech companies pushing back on that front and refusing to get rolled by law enforcement and national security agencies the way that far too many telecom and tech companies have been in the past. Following today’s ECJ ruling, tech companies are realizing just how serious this problem really is, because European officials are now striking out against the safe harbor agreement as a surrogate for their general frustrations with US surveillance more broadly. Indeed, in a press release following today’s ruling, the Internet Association, which represents major US tech firms, noted that “The Internet industry has consistently supported surveillance reform,” and the Association pushed for swift congressional action to clarify and limit existing surveillance powers. It remains to be seen whether the US tech sector and other related industries will be able to push back effectively against the growing surveillance state leviathan, but it’s clearer today than ever before why that’s a fight worth having.

New ITIF Study on “Privacy Panics” https://techliberation.com/2015/09/11/new-itif-study-on-privacy-panics/ https://techliberation.com/2015/09/11/new-itif-study-on-privacy-panics/#comments Sat, 12 Sep 2015 02:02:16 +0000 http://techliberation.com/?p=75718

It was my pleasure this week to be invited to deliver some comments at an event hosted by the Information Technology and Innovation Foundation (ITIF) to coincide with the release of their latest study, “The Privacy Panic Cycle: A Guide to Public Fears About New Technologies.” The goal of the new ITIF report, which was co-authored by Daniel Castro and Alan McQuinn, is to highlight the dangers associated with “the cycle of panic that occurs when privacy advocates make outsized claims about the privacy risks associated with new technologies. Those claims then filter through the news media to policymakers and the public, causing frenzies of consternation before cooler heads prevail, people come to understand and appreciate innovative new products and services, and everyone moves on.” (p. 1)

As Castro and McQuinn describe it, the privacy panic cycle “charts how perceived privacy fears about a technology grow rapidly at the beginning, but eventually decline over time.” They divide this cycle into four phases: Trusting Beginnings, Rising Panic, Deflating Fears, and Moving On. Here’s how they depict it in an image:

Privacy Panic Cycle - 1

 

The report can be seen as an extension of the literature on “moral panics” and “techno-panics.” Some relevant texts in this field include Stanley Cohen’s Folk Devils and Moral Panics, Erich Goode and Nachman Ben-Yehuda’s Moral Panics: The Social Construction of Deviance, Cass Sunstein’s Laws of Fear, and Barry Glassner’s Culture of Fear. But there’s a rich body of academic writing on this topic and I’ve tried to make a small contribution to this literature in recent years, most notably with a lengthy 2013 law review article, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle.” In that paper, I try to connect the literature on moral panic theory (which mostly focuses on panics about speech and cultural changes) to other scholarship about how panics and threat inflation are used in many other contexts, including the fields of national security policy, cybersecurity, and more.

I define “technopanic” as “intense public, political, and academic responses to the emergence or use of media or technologies, especially by the young.” “Threat inflation” has been defined by national security policy experts Jane K. Cramer and A. Trevor Thrall as “the attempt by elites to create concern for a threat that goes beyond the scope and urgency that a disinterested analysis would justify.”

Castro and McQuinn’s new study on privacy panic cycles fits neatly within this analytical framework and makes an important contribution to the literature. They warn of the real dangers associated with these privacy panics, especially in terms of lost opportunities for innovation. “Policymakers should not get caught up in the panics that follow in the wake of new technologies,” they argue, “and they should not allow hypothetical, speculative, or unverified claims to color the policies they put in place. Similarly, they should not allow unsubstantiated claims put forth by privacy fundamentalists to derail legitimate public sector efforts to use technology to improve society.” (p. 28)

I think one of the most important takeaways from the study is that, as Castro and McQuinn note, “history has shown, many of the overinflated claims about loss of privacy have never materialized.” (p. 28) They identify many reasons why that may be the case but, most notably, they explain how societal attitudes often quickly adjust and also that “social norms dissuade many practices that are feasible but undesirable.” (p. 28) I have spent a lot of time thinking through this process of individual and social acclimation to new technologies and, most recently, wrote an essay on this topic entitled, “Muddling Through: How We Learn to Cope with Technological Change.”

Castro and McQuinn highlight several historical case studies that illustrate how privacy panics play out in practice. They include studies of photography, the transistor, and RFID tags. They then map out how various new technologies are currently experiencing, or might soon experience, a privacy panic. Those include drones, facial recognition, connected cars, behavioral advertising, the Internet of Things, and wearable tech. Here’s where Castro and McQuinn believe each of those technologies currently falls on the privacy panic curve.

 

[Figure: Privacy Panic Cycle chart showing where each technology currently falls on the curve]

One problem with the ITIF report, however, is that it avoids the question of what constitutes a serious enough privacy “harm” to actually be worth panicking over. Certainly there must be something that deserves special concern – perhaps even a little bit of panic. Of course, as I noted in my remarks at the event, this is a problem with a great deal of literature in this field due to the challenge associated with defining what we even mean by “privacy” or “privacy harm.” Nonetheless, while some privacy fundamentalists are far too aggressive in using amorphous conceptions of privacy harms to fuel privacy panics, it can also be the case that others (like Castro, McQuinn, and myself) don’t do enough to specify when extremely serious privacy problems exist that warrant heightened concern.

The ITIF report rightly singles out the many groups that all too often use fear tactics and threat inflation to advance their own agendas. In the academic literature on moral panics, these people or groups are referred to as “fear entrepreneurs.” They hope to create and then take advantage of a state of fear to demand that “something must be done” about supposed problems that are often either greatly overstated or which will be solved (or just go away) over time. (For more on “fear entrepreneurs,” see Frank Furedi’s outstanding 2009 article on “Precautionary Culture and the Rise of Probabilistic Risk Assessment.”) These individuals and groups often end up having a disproportionate impact on policy debates and, through their vociferous activism, threaten to achieve a sort of “heckler’s veto” over digital innovation.

However, as I stressed in my remarks at ITIF’s launch event for the study, I believe that Castro and McQuinn were wrong to single out the International Association of Privacy Professionals (IAPP) as one of these troublemakers. Castro and McQuinn claim that “there is now a professional class of people whose job is to manage privacy risks and promote the idea that technology is becoming more invasive. These privacy professionals have a vested interest in inflating the perceived privacy risk of new technologies as their livelihood depends on businesses’ willingness to pay them to address these concerns.” (p. 8)

I think that mischaracterizes the role that most IAPP-trained privacy professionals play today. I have done a lot of work with IAPP itself and many of the privacy professionals they have trained. In my experience, these folks aren’t trying to fan the flames of “privacy panics.” To the contrary, many (perhaps most) IAPP professionals are actively involved in putting out those fires or making sure that they do not start raging in the first place. This is particularly true of the huge number of IAPP-trained privacy professionals who work for major technology companies and who work hard every day to find practical solutions to real-world privacy and security-related concerns.

Of course, as with any large membership organization, one can find some IAPP-trained privacy professionals who may indeed be guilty of fueling privacy panics for personal or organizational purposes. After all, some IAPP-trained folks work for privacy advocacy organizations which could be classified as “privacy fundamentalists” in their philosophical orientation. But just because some IAPP-trained people play techno-panic games, it certainly doesn’t mean that most of them do.

Relatedly, another small nitpick I have with the ITIF study is that it groups together a large number of privacy and security-focused tech policy groups and implies that they are all equally guilty of fueling privacy panics. In reality, there is a small core group of individuals and advocacy organizations who are far more vociferous and extreme in their privacy panic rhetoric. Others may be guilty of that at times, but not nearly to the same extent as the most panicky Chicken Littles.

The only other problem I had with the study, and this is really quite a small matter, is that I would have liked to have seen some discussion about some strategies we might be able to employ to help counter privacy panics, or lessen the likelihood that they develop at all. In my own work, I have tried to develop constructive solutions to privacy and security-related concerns that might give rise to panics. Those solutions include things like education and tech literacy efforts, empowerment tools, transparency efforts, and so on. It’s also worth reminding concerned critics that there exists a broad range of existing legal remedies that can help address privacy concerns after the fact. These include torts and common law solutions, contractual remedies, class actions, other targeted legal solutions, and enforcement of “unfair and deceptive practices” by the Federal Trade Commission or state attorneys general. And there’s also important industry self-regulatory efforts and best practices that can help alleviate many of these privacy concerns. I would have liked to have seen the ITIF study address these or other potential solutions to privacy panics.

Overall, however, I thought that the ITIF report makes an important contribution to the literature in this field and provides us with a useful analytic framework to help us evaluate and critique privacy-related technopanics in the future.

The video of the launch event is below and the full paper can be found here. Also, for further reading on technopanics, see my compendium of 40 essays I have written on the topic.

The Challenge of Defining Privacy Harm https://techliberation.com/2015/06/19/the-challenge-of-defining-privacy-harm/ https://techliberation.com/2015/06/19/the-challenge-of-defining-privacy-harm/#respond Fri, 19 Jun 2015 18:12:30 +0000 http://techliberation.com/?p=75593

On Thursday, it was my great pleasure to participate in a Washington Legal Foundation (WLF) event on “Online Privacy Regulation: The Challenge of Defining Harm.” The entire event video can be found on YouTube here, but down below I pasted the clip of just my remarks. Other speakers at the event included: FTC Commissioner Maureen K. Ohlhausen; John B. Morris, Jr., the Associate Administrator and Director of Internet Policy at the U.S. Department of Commerce’s National Telecommunications and Information Administration; and Katherine Armstrong, Counsel at the law firm of Hogan Lovells. Glenn Lammi of the WLF moderated the session.

My remarks drew upon a few recent law review articles I have published relating digital privacy debates to previous debates over free speech and online child safety issues. (Here are those articles: 1, 2, 3).

Autonomous Vehicles Under Attack: Cyber Dashboard Standards and Class Action Lawsuits https://techliberation.com/2015/03/14/autonomous-vehicles-under-attack-cyber-dashboard-standards-and-class-action-lawsuits/ https://techliberation.com/2015/03/14/autonomous-vehicles-under-attack-cyber-dashboard-standards-and-class-action-lawsuits/#respond Sat, 14 Mar 2015 13:06:08 +0000 http://techliberation.com/?p=75511

In a recent Senate Commerce Committee hearing on the Internet of Things, Senators Ed Markey (D-Mass.) and Richard Blumenthal (D-Conn.) “announced legislation that would direct the National Highway Traffic Safety Administration (NHTSA) and the Federal Trade Commission (FTC) to establish federal standards to secure our cars and protect drivers’ privacy.” Spurred by a recent report from his office (Tracking and Hacking: Security and Privacy Gaps Put American Drivers at Risk), Markey argued that Americans “need the equivalent of seat belts and airbags to keep drivers and their information safe in the 21st century.”

Among the many conclusions reached in the report, it says, “nearly 100% of cars on the market include wireless technologies that could pose vulnerabilities to hacking or privacy intrusions.” This comes across as a tad tautological given that everything from smartphones and computers to large-scale power grids is prone to being hacked, yet the Markey-Blumenthal proposal would enforce a separate set of government-approved and regulated standards for privacy and security, displayed on every vehicle in the form of a “Cyber Dashboard” decal.

Leaving aside the irony of legislators attempting to dictate privacy standards, especially in the post-Snowden world, it would behoove legislators like Markey and Blumenthal to take a closer look at just what it is they are proposing and ask whether such a law is indeed necessary to protect consumers. For security in particular, there may be concerns that require redress, but if one looks at the report, it becomes apparent that it lacks a very important feature: no specific examples of real car hacking are mentioned. The only examples illustrated in the report are described in brief detail:

An application was developed by a third party and released for Android devices that could integrate with a vehicle through the Bluetooth connection. A security analysis did not indicate any ability to introduce malicious code or steal data, but the manufacturer had the app removed from the Google Play store as a precautionary measure.

Great! The company solved the problem. What about the other instance cited in the report?

Some individuals have attempted to reprogram the onboard computers of vehicles to increase engine horsepower or torque through the use of “performance chips”. Some of these devices plug into the mandated onboard diagnostic port or directly into the under-the-hood electronics system.

So the only two examples of “car hacking” described in the Markey report are essentially duds. The first is a non-issue, since the company (1) determined there was little security risk involved and (2) removed the item from the market anyway, just to be sure. The second is, in a sense, hacking, but it is individual car owners doing it to their own cars. Neither of these cases appears to be sufficient grounds for imposing a set of arbitrary and, in many cases, capricious anti-innovation approaches to privacy and data security in cars.

In the wake of the report’s release, this past Tuesday, March 10, General Motors, Toyota, and Ford were all hit with a nationwide class action lawsuit, alleging that the companies concealed “dangers posed by a lack of electronic security in a vast swath of vehicles.” Specifically, the lawsuit is aimed at the presence of controller area network (CAN) buses, which act as data hubs between the various electronic systems in a car. These systems are, indeed, susceptible to hacking, but no more than any personal computer that is connected to the Internet.
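The “data hub” role of a CAN bus is easy to see in the frame format itself: a classic CAN frame carries only an arbitration ID, a length, and up to eight data bytes, with no field identifying or authenticating the sender. Any node with bus access can emit frames under any ID, which is both what makes the bus simple and what makes it a hacking target. As a rough sketch, the snippet below packs and unpacks a frame using the 16-byte layout popularized by Linux’s SocketCAN; the specific ID and payload are made up for illustration.

```python
import struct

# Linux SocketCAN serializes a classic CAN frame into 16 bytes:
# a 32-bit arbitration ID, an 8-bit data length code (DLC),
# 3 padding bytes, then up to 8 data bytes.
CAN_FRAME_FMT = "<IB3x8s"

def pack_can_frame(can_id: int, data: bytes) -> bytes:
    """Serialize a classic CAN frame (illustrative values only)."""
    if len(data) > 8:
        raise ValueError("classic CAN payloads are at most 8 bytes")
    return struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00"))

def unpack_can_frame(frame: bytes):
    """Recover the arbitration ID and the used portion of the payload."""
    can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, frame)
    return can_id, data[:dlc]

# A hypothetical engine-status broadcast. Note that nothing in the frame
# says who sent it: any node on the bus could have produced it, which is
# the security concern in a nutshell.
frame = pack_can_frame(0x0C8, b"\x12\x34")
can_id, payload = unpack_can_frame(frame)
```

Nothing here is specific to the vehicles named in the lawsuit; it simply illustrates why CAN-connected systems are “susceptible to hacking” in the same generic sense as any networked computer.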

The trouble with this lawsuit, brought by the Stanley Law Group, is that it has not cited any specific harms that have occurred as a result of this “defect” (as a side note, saying a computer being susceptible to hacking constitutes a defect in design is the equivalent of saying an airplane that is susceptible to lightning strikes is fundamentally defective). Rather, the plaintiffs argue that “[w]e shouldn’t need to wait for a hacker or terrorist to prove exactly how dangerous this is before requiring car makers to fix the defect.”

As Adam Thierer and I pointed out in our 2014 paper, Removing Roadblocks to Intelligent Vehicles and Driverless Cars:

Manufacturers have powerful reputational incentives at stake here, which will encourage them to continuously improve the security of their systems. Companies like Chrysler and Ford are already looking into improving their telematics systems to better compartmentalize the ability of hackers to gain access to a car’s controller-area-network bus. Engineers are also working to solve security vulnerabilities by utilizing two-way data-verification schemes (the same systems at work when purchasing items online with a credit card), routing software installs and updates through remote servers to check and double-check for malware, adopting of routine security protocols like encrypting files with digital signatures, and other experimental treatments. (pg. 40-41)
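The update-verification measures described in the passage above can be sketched in a few lines. Production systems use asymmetric digital signatures, as the quote notes; since that requires a third-party crypto library, this simplified stand-in uses an HMAC over the firmware image with a key shared between the update server and the vehicle. All names here (`sign_firmware`, `verify_and_install`, the key, the image bytes) are hypothetical and not drawn from any manufacturer’s actual system.

```python
import hashlib
import hmac

# Shared secret provisioned at manufacture (hypothetical; real deployments
# prefer asymmetric signatures so the vehicle holds no signing secret).
UPDATE_KEY = b"example-key-not-for-production"

def sign_firmware(image: bytes) -> bytes:
    """Server side: tag the firmware image before distribution."""
    return hmac.new(UPDATE_KEY, image, hashlib.sha256).digest()

def verify_and_install(image: bytes, tag: bytes) -> bool:
    """Vehicle side: refuse any image whose tag does not verify."""
    expected = hmac.new(UPDATE_KEY, image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False  # tampered or corrupted update: reject, do not flash
    # install_firmware(image)  # hypothetical flash-write step
    return True

firmware = b"telematics build 42"
tag = sign_firmware(firmware)
ok = verify_and_install(firmware, tag)           # genuine image verifies
bad = verify_and_install(firmware + b"\x00", tag)  # any change is rejected
```

The point of the sketch is the check-before-install pattern, not the particular primitive: the same structure holds whether the tag is an HMAC, an RSA signature, or an Ed25519 signature.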

It’s always easy to see the potential for abuse and harm with any new emerging technology, but optimism and fortitude in the face of the uncertain is what helps society, and individuals, grow and progress. Car hacking, while certainly a viable concern, is not so ubiquitous that it necessitates a heavy-handed regulatory approach. Rather, we should permit various standards to emerge and attempt to deal with possible harms. In this way, we can experiment to properly determine which approaches work and which do not. Federal standards imposed from on high assume that firms and individuals are not capable of working through these murky issues. We should be a bit more optimistic about the human capacity for ingenuity and adaptability.

To end on something of a more optimistic note, Tom Vanderbilt of Wired magazine gives keen insight into the reality of regulating based on hypothetical scenarios:

Every scenario you can spin out of computer error – what if the car drives the wrong way – already exists in analog form, in abundance. Yes, computer-guidance systems and the rest will require advances in technology, not to mention redundancy and higher standards of performance, but at least these are all feasible, and capable of quantifiable improvement. On the other hand, we’ll always have lousy drivers.

 


 

Additional Reading 

Initial Thoughts on Obama Administration’s “Privacy Bill of Rights” Proposal https://techliberation.com/2015/02/27/initial-thoughts-on-obama-administrations-privacy-bill-of-rights-proposal/ https://techliberation.com/2015/02/27/initial-thoughts-on-obama-administrations-privacy-bill-of-rights-proposal/#comments Fri, 27 Feb 2015 21:28:30 +0000 http://techliberation.com/?p=75488

The Obama Administration has just released a draft “Consumer Privacy Bill of Rights Act of 2015.” Generally speaking, the bill aims to translate fair information practice principles (FIPPs) — which have traditionally been flexible and voluntary guidelines — into a formal set of industry best practices that would be federally enforced on private sector digital innovators. This includes federally mandated Privacy Review Boards, approved by the Federal Trade Commission, the agency that would be primarily responsible for enforcing the new regulatory regime.

Many of the principles found in the Administration’s draft proposal are quite sensible as best practices, but the danger here is that they could soon be converted into a heavy-handed, bureaucratized regulatory regime for America’s highly innovative, data-driven economy.

No matter how well-intentioned this proposal may be, it is vital to recognize that restrictions on data collection could negatively impact innovation, consumer choice, and the competitiveness of America’s digital economy.

Online privacy and security is vitally important, but we should look to use alternative and less costly approaches to protecting privacy and security that rely on education, empowerment, and targeted enforcement of existing laws. Serious and lasting long-term privacy protection requires a layered, multifaceted approach incorporating many solutions.

That is why flexible data collection and use policies and evolving best practices will ultimately serve consumers better than one-size-fits-all, top-down regulatory edicts. Instead of imposing these FIPPs in a rigid regulatory fashion, privacy and security best practices will need to evolve gradually in response to new marketplace realities and be applied in a more organic and flexible fashion, often outside the realm of public policy.

Regulatory approaches, like the Obama Administration’s latest proposal, will instead impose significant costs on consumers and the economy. Data is the fuel that powers our information economy. Privacy-related mandates that curtail the use of data to better target or personalize new services could raise costs for consumers. There is no free lunch. Something has to pay for all the wonderful free sites and services we enjoy today. If data can’t be used to cross-subsidize those services, prices will go up.

Data regulations could also indirectly cost consumers by diminishing the abundance of content and culture now supported by the data-driven economy. In other words, even if prices and paywalls don’t go up, quantity or quality could suffer if data collection is restricted.

Data regulations could also hurt the competitiveness of domestic markets and the global competitive advantage that America’s tech sector has in this space. That regulatory burden would fall hardest on smaller operators and new start-ups. Today’s “app economy” has given countless small innovators a chance to compete on even footing with the biggest players. Burdensome data collection restrictions could short-circuit the engine that drives entrepreneurial innovation among mom-and-pop companies if ad dollars get consolidated in the hands of only the larger companies that can afford to comply with new rules.

We don’t want to go down the path the European Union charted in the 1990s with heavy-handed data directives. That suffocated high-tech entrepreneurialism and innovation there. America’s Internet sector came to be the envy of the world because our more flexible, light-touch regulatory regime leaves more breathing room for competition and innovation compared to Europe’s top-down regime. We should not abandon that approach now.

Finally, the Obama Administration’s proposal deals exclusively with private sector data collection and has nothing to say about government surveillance activities. The Administration would be wise to channel its energies into that far more significant privacy problem first.


Additional Reading from Adam Thierer of the Mercatus Center

Law Review Articles:

Testimony / Filings

 

Don’t Hit the (Techno-)Panic Button on Connected Car Hacking & IoT Security https://techliberation.com/2015/02/10/dont-hit-the-techno-panic-button-on-connected-car-hacking-iot-security/ https://techliberation.com/2015/02/10/dont-hit-the-techno-panic-button-on-connected-car-hacking-iot-security/#comments Tue, 10 Feb 2015 20:15:02 +0000 http://techliberation.com/?p=75425

On Sunday night, 60 Minutes aired a feature with the ominous title, “Nobody’s Safe on the Internet,” that focused on connected car hacking and Internet of Things (IoT) device security. It was followed yesterday morning by the release of a new report from the office of Senator Edward J. Markey (D-Mass.) called Tracking & Hacking: Security & Privacy Gaps Put American Drivers at Risk, which focused on connected car security and privacy issues. Employing more than a bit of techno-panic flair, these reports basically suggest that we’re all doomed.

On 60 Minutes, we meet former game developer turned Department of Defense “cyber warrior” Dan (“call me DARPA Dan”) Kaufman, and learn his fears of the future: “Today, all the devices that are on the Internet [and] the ‘Internet of Things’ are fundamentally insecure. There is no real security going on. Connected homes could be hacked and taken over.”

60 Minutes reporter Lesley Stahl, for her part, is aghast. “So if somebody got into my refrigerator,” she ventures, “through the internet, then they would be able to get into everything, right?” Replies DARPA Dan, “Yeah, that’s the fear.” Prankish hackers could make your milk go bad, or hack into your garage door opener, or even your car.

This segues to a humorous segment wherein Stahl takes a networked car for a spin. DARPA Dan and his multiple research teams have been hard at work remotely programming this vehicle for years. A “hacker” on DARPA Dan’s team proceeds to torment poor Lesley with automatic windshield wiping, rude and random beeps, and other hijinks. “Oh my word!” exclaims Stahl.

Never mind that we are told that the “hackers” who “hacked” into this car had been directly working on its systems for years—a luxury scarcely available to the shadowy malicious hackers about whom DARPA Dan and his team so hoped to frighten us. The careful setup, editing, and Lesley Stahl’s squeals made for convincing theater.

Then there’s the Markey report. On the surface, the findings appear grim. For instance, we are warned that “Nearly 100% of cars on the market include wireless technologies that could pose vulnerabilities to hacking or privacy intrusions.” Nearly 100%? We’re practically naked out there! But digging through the report, we learn that the basis for this claim is that most of the 16 manufacturers surveyed responded that 100% of their vehicles are equipped with wireless entry points (WEPs)—like Bluetooth, Wi-Fi, navigation, and anti-theft features. Because these features “could pose vulnerabilities,” they are listed as a threat—one that lurks in nearly 100% of the cars on the market, at that.

Much of the report is similarly panicky and sometimes humorous (complaint #3: “many manufacturers did not seem to understand the questions posed by Senator Markey.”) The report concludes that the “alarmingly inconsistent and incomplete state of industry security and privacy practice” warrants recommendations that federal regulators — led by the National Highway Traffic Safety Administration (NHTSA) and the Federal Trade Commission (FTC) — “promulgate new standards that will protect the data, security and privacy of drivers in the modern age of increasingly connected vehicles.”

Take a Deep Breath

As we face an uncertain future full of rapidly-evolving technologies, it’s only natural that some might feel a little anxiety about how these new machines and devices operate. Despite the exaggerated and sometimes silly nature of techno-panic reports like these, they reflect many people’s real and understandable concerns about new technologies.

But the problem with these reports is that they embody a “panic-first” approach to digital security and privacy issues. It is certainly true that our cars are becoming rolling computers, complete with an arsenal of sensors and networking technologies, and the rise of the Internet of Things means almost everything we own or come into contact with will possess networking capabilities. Consequently, just as our current generation of computing and communications technologies is vulnerable to some forms of hacking, it is likely that our cars and IoT devices will be as well.

But don’t you think that automakers and IoT developers know that? Are we really to believe that journalists, congressmen, and DARPA Dan have a greater incentive to understand these issues than the manufacturers whose companies and livelihoods are on the line? And wouldn’t these manufacturers only take on these risks if consumer demand and expected value supported them? Watching the 60 Minutes spot and reading through the Markey report, one is led to think that innovators in this space are completely oblivious to these threats, simply don’t care enough to address them, and don’t have any plans in motion. But that is lunacy.

No Mention of Liability?

To begin, neither report even mentions the possibility of massive liability for future hacking attacks on connected cars or IoT devices. That is amazing considering how the auto industry already attracts an absolutely astonishing amount of litigation activity. (Ambulance-chasing is a full-time legal profession, after all.) Thus, to the extent that some automakers don’t want to talk about everything they are doing to address security issues, it’s likely because they are still figuring out how to address the various vulnerabilities out there without attracting the attention of either enterprising hackers or trial lawyers.

Nonetheless, contrary to the absurd statement by Mr. Kaufman that “There is no real security going on” for connected cars or the Internet of Things, the reality is that these are issues that developers are actively studying and trying to address. Manufacturers of connected devices know that: (1) nobody wants to own or use devices that are fundamentally insecure or dangerous; and (2) if they sell such devices to the public, they are in for a world of hurt once the trial lawyers see the first headlines about it.

It is also still quite unclear how big the threat is here. Writing over at Forbes yesterday, Doug Newcomb notes that “the threat of car hacking has largely been overblown by the media – there’s been only one case of a malicious car hack, and that was an inside job by a disgruntled former car dealer employee. But it’s a surefire way to get the attention of the public and policymakers,” he correctly observes. Newcomb also interviewed Damon McCoy, an assistant professor of computer science at George Mason University and a car security researcher, who noted that car hacking hasn’t become prevalent and that “Given the [monetary] motivation of most hackers, the chance of [automotive hacking] is very low.”

Security is a Dynamic, Evolving Process

Regardless, the notion that we can just clean this whole device security situation up with a single set of federal standards, as the Markey report suggests, is appealing but fanciful. “Security threats are constantly changing and can never be holistically accounted for through even the most sophisticated flowcharts,” observed my Mercatus Center colleagues Eli Dourado and Andrea Castillo in their recent white paper on “Why the Cybersecurity Framework Will Make Us Less Secure.” “By prioritizing a set of rigid, centrally designed standards, policymakers are neglecting potent threats that are not yet on their radar,” Dourado and Castillo note elsewhere.

We are at the beginning of a long process. There is no final destination when it comes to security; it’s a never-ending process of devising and refining policies to address vulnerabilities on the fly. The complex problem of cybersecurity readiness requires dynamic solutions that properly align incentives, improve communication and collaboration, and encourage good personal and organizational stewardship of connected systems. Implementing the brittle bureaucratic standards that Markey and others propose could have the tragic unintended consequence of rendering our devices even less secure.

Standards Are Developing Rapidly

Meanwhile, the auto industry has already come up with privacy standards that go above and beyond what most other digital innovators apply to their own products today. Here are the Auto Alliance’s “Consumer Privacy Protection Principles: Privacy Principles for Vehicle Technologies and Services,” which 23 major automobile manufacturers agreed to abide by. And, according to a press release yesterday, “automakers are currently working to establish an Information Sharing Analysis Center (or “Auto-ISAC”) for sharing vehicle cybersecurity information among industry stakeholders.”

Again, progress continues and standards are evolving. This needs to be a flexible, evolutionary process, instead of a static, top-down, one-size-fits-all bureaucratic political proceeding.

We can’t set down security and privacy standards in stone for fast-moving technologies like these for another reason, and one I am constantly stressing in my work on “Why Permissionless Innovation Matters.” If we spend all our time worrying about hypothetical worst-case scenarios — and basing our policy interventions on a parade of hypothetical horribles — then we run the risk that best-case scenarios will never come about.  As analysts at the Center for Data Innovation correctly argue, policymakers should only intervene to address specific, demonstrated harms. “Attempting to erect precautionary regulatory barriers for purely speculative concerns is not only unproductive, but it can discourage future beneficial applications of the Internet of Things.” And the same is true for connected cars.

Trade-Offs Matter

Technopanic indulgence isn’t always merely silly or annoying—it can be deadly.

“During the four deadliest wars the United States fought in the 20th century, 39 percent more Americans were dying in motor vehicles” than on the battlefield. So writes Washington Post reporter Matt McFarland in a powerful new post today. The ongoing toll associated with human error behind the wheel is falling but remains absolutely staggering, with almost 100 people losing their lives and almost 6,500 people injured every day.

We must never fail to appreciate the trade-offs at work when we are pondering precautionary regulation. Ryan Hagemann and I wrote about these issues in our recent Mercatus Center working paper, “Removing Roadblocks to Intelligent Vehicles and Driverless Cars.” That paper, which has been accepted for publication in a forthcoming edition of the Wake Forest Journal of Law & Policy, outlines the many benefits of autonomous or semi-autonomous systems and discusses the potential cost of delaying their widespread adoption.

When it comes to the various security, privacy, and ethical considerations related to intelligent vehicles, Hagemann and I argue that they “need to be evaluated against the backdrop of the current state of affairs, in which tens of thousands of people die each year in auto-related accidents due to human error.” We continue on later in the paper:

Autonomous vehicles are unlikely to create 100 percent safe, crash-free roadways, but if they significantly decrease the number of people killed or injured as a result of human error, then we can comfortably suggest that the implications of the technology, as a whole, are a boon to society. The ethical underpinnings of what makes for good software design and computer-generated responses are a difficult and philosophically robust space for discussion. Given the abstract nature of the intersection of ethics and robotics, a more detailed consideration and analysis of this space must be left for future research. Important work is currently being done on this subject. But those ethical considerations must not derail ongoing experimentation with intelligent-vehicle technology, which could save many lives and have many other benefits, as already noted. Only through ongoing experimentation and feedback mechanisms can we expect to see constant improvement in how autonomous vehicles respond in these situations to further minimize the potential for accidents and harms. (p. 42-3)

As I noted here in another recent essay, “anything we can do to reduce it significantly is something we need to be pursuing with great vigor, even while we continue to sort through some of those challenging ethical issues associated with automated systems and algorithms.”

No Mention of Alternative Solutions

Finally, it is troubling that neither the 60 Minutes segment nor the Markey report spend any time on alternative solutions to these problems. In my forthcoming law review article, “The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation,” I devote the second half of the 90-page paper to constructive solutions to the sort of complex challenges raised in the 60 Minutes segment and the Markey report.

Many of the solutions I discuss in that paper — such as education and awareness-building efforts, empowerment solutions, the development of new social norms, and so on – aren’t even touched on by the reports. That’s a real shame because those methods could go a long way toward helping to alleviate many of the issues the reports identify.

We need a better public dialogue than this about the future of connected cars and Internet of Things security. Political scare tactics and techno-panic journalism are not going to help make the world a safer place. In fact, by whipping up a panic and potentially discouraging innovation, reports such as these can actually serve to prevent critical, life-saving technologies that could change society for the better.


Additional Reading

 

]]>
https://techliberation.com/2015/02/10/dont-hit-the-techno-panic-button-on-connected-car-hacking-iot-security/feed/ 2 75425
Some Initial Thoughts on the FTC Internet of Things Report https://techliberation.com/2015/01/28/some-initial-thoughts-on-the-ftc-internet-of-things-report/ https://techliberation.com/2015/01/28/some-initial-thoughts-on-the-ftc-internet-of-things-report/#comments Wed, 28 Jan 2015 14:54:30 +0000 http://techliberation.com/?p=75351

Yesterday, the Federal Trade Commission (FTC) released its long-awaited report on “The Internet of Things: Privacy and Security in a Connected World.” The 55-page report is the result of a lengthy staff exploration of the issue, which kicked off with an FTC workshop on the issue that was held on November 19, 2013.

I’m still digesting all the details in the report, but I thought I’d offer a few quick thoughts on some of the major findings and recommendations from it. As I’ve noted here before, I’ve made the Internet of Things my top priority over the past year and have penned several essays about it here, as well as in a big new white paper (“The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation”) that will be published in the Richmond Journal of Law & Technology shortly. (Also, here’s a compendium of most of what I’ve done on the issue thus far.)

I’ll begin with a few general thoughts on the FTC’s report and its overall approach to the Internet of Things and then discuss a few specific issues that I believe deserve attention.

Big Picture, Part 1: Should Best Practices Be Voluntary or Mandatory?

Generally speaking, the FTC’s report contains a variety of “best practice” recommendations to get Internet of Things innovators to take steps to ensure greater privacy and security “by design” in their products. Most of those recommended best practices are sensible as general guidelines for innovators, but the really sticky question here continues to be this: When, if ever, should “best practices” become binding regulatory requirements?

The FTC does a bit of a dance when answering that question. Consider how, in the executive summary of the report, the Commission answers the question regarding the need for additional privacy and security regulation: “Commission staff agrees with those commenters who stated that there is great potential for innovation in this area, and that IoT-specific legislation at this stage would be premature.” But, just a few lines later, the agency (1) “reiterates the Commission’s previous recommendation for Congress to enact strong, flexible, and technology-neutral federal legislation to strengthen its existing data security enforcement tools and to provide notification to consumers when there is a security breach;” and (2) “recommends that Congress enact broad-based (as opposed to IoT-specific) privacy legislation.”

Here and elsewhere, the agency repeatedly stresses that it is not seeking IoT-specific regulation; merely “broad-based” digital privacy and security legislation. The problem is that once you understand what the IoT is all about, you come to realize that this largely represents a distinction without a difference. The Internet of Things is simply the extension of the Net into everything we own or come into contact with. Thus, this idea that the agency is not seeking IoT-specific rules sounds terrific until you realize that it is actually seeking something far more sweeping: greater regulation of all online / digital interactions. And because “the Internet” and “the Internet of Things” will eventually (if they are not already) be considered synonymous, this notion that the agency is not proposing technology-specific regulation is really quite silly.

Now, it remains unclear whether there exists any appetite on Capitol Hill for “comprehensive” legislation of any variety – although perhaps we’ll learn more about that possibility when the Senate Commerce Committee hosts a hearing on these issues on February 11. But at least thus far, “comprehensive” or “baseline” digital privacy and security bills have been non-starters.

And that’s for good reason in my opinion: Such regulatory proposals could take us down the path that Europe charted in the late 1990s with onerous “data directives” and suffocating regulatory mandates for the IT / computing sector. The results of this experiment have been unambiguous, as I documented in congressional testimony in 2013. I noted there how America’s Internet sector came to be the envy of the world while it was hard to name any major Internet company from Europe. Whereas America embraced “permissionless innovation” and let creative minds develop one of the greatest success stories in modern history, the Europeans adopted a “Mother, May I” regulatory approach for the digital economy. America’s more flexible, light-touch regulatory regime leaves more room for competition and innovation compared to Europe’s top-down regime. Digital innovation suffered over there while it blossomed here.

That’s why we need to be careful about adopting the sort of “broad-based” regulatory regime that the FTC recommends in this and previous reports.

Big Picture, Part 2: Does the FTC Really Need More Authority?

Something else is going on in this report that has also been happening in all the FTC’s recent activity on digital privacy and security matters: The agency has been busy laying the groundwork for its own expansion.

In this latest report, for example, the FTC argues that

Although the Commission currently has authority to take action against some IoT-related practices, it cannot mandate certain basic privacy protections… The Commission has continued to recommend that Congress enact strong, flexible, and technology-neutral legislation to strengthen the Commission’s existing data security enforcement tools and require companies to notify consumers when there is a security breach.

In other words, this agency wants more authority. And we are talking about sweeping authority here that would transcend its already sweeping authority to police “unfair and deceptive practices” under Section 5 of the FTC Act. Let’s be clear: It would be hard to craft a law that grants an agency more comprehensive and open-ended consumer protection authority than Section 5. The meaning of those terms — “unfairness” and “deception” — has always been a contentious matter, and at times the agency has abused its discretion by exploiting that ambiguity.

Nonetheless, Sec. 5 remains a powerful enforcement tool for the agency and one that has been wielded aggressively in recent years to police digital economy giants and small operators alike. Generally speaking, I’m alright with most Sec. 5 enforcement, especially since that sort of retrospective policing of unfair and deceptive practices is far less likely to disrupt permissionless innovation in the digital economy. That’s because it does not subject digital innovators to the sort of “Mother, May I” regulatory system that European entrepreneurs face. But an expansion of the FTC’s authority via more “comprehensive, baseline” privacy and security regulatory policies threatens to convert America’s more sensible bottom-up and responsive regulatory system into the sort of innovation-killing regime we see on the other side of the Atlantic.

Here’s the other thing we can’t forget when it comes to the question of what additional authority to give the FTC over privacy and security matters: The FTC is not the end of the enforcement story in America. Other enforcement mechanisms exist, including: privacy torts, class action litigation, property and contract law, state enforcement agencies, and other targeted privacy statutes. I’ve summarized all these additional enforcement mechanisms in my recent law review article referenced above. (See section VI of the paper.)

FIPPS, Part 1: Notice & Choice vs. Use-Based Restrictions

Next, let’s drill down a bit and examine some of the specific privacy and security best practices that the agency discusses in its new IoT report.

The FTC report highlights how the IoT creates serious tensions for many traditional Fair Information Practice Principles (FIPPs). The FIPPs generally include: (1) notice, (2) choice, (3) purpose specification, (4) use limitation, and (5) data minimization. But the report is mostly focused on notice and choice as well as data minimization.

When it comes to notice and choice, the agency wants to keep hope alive that it will still be applicable in an IoT world. I’m sympathetic to this effort because it is quite sensible for all digital innovators to do their best to provide consumers with adequate notice about data collection practices and then give them sensible choices about it. Yet, like the agency, I agree that “offering notice and choice is challenging in the IoT because of the ubiquity of data collection and the practical obstacles to providing information without a user interface.”

The agency has a nuanced discussion of how context matters in providing notice and choice for IoT, but one can’t help but think that even they must realize that the game is over, to some extent. The increasing miniaturization of IoT devices and the ease with which they suck up data means that traditional approaches to notice and choice just aren’t going to work all that well going forward. It is almost impossible to envision how a rigid application of traditional notice and choice procedures would work in practice for the IoT.

Relatedly, as I wrote here last week, the Future of Privacy Forum (FPF) recently released a new white paper entitled, “A Practical Privacy Paradigm for Wearables,” that notes how FIPPs “are a valuable set of high-level guidelines for promoting privacy, [but] given the nature of the technologies involved, traditional implementations of the FIPPs may not always be practical as the Internet of Things matures.” That’s particularly true of the notice and choice FIPPs.

But the FTC isn’t quite ready to throw in the towel and make the complete move toward “use-based restrictions,” as many academics have. (Note: I have a lengthy discussion of this migration toward use-based restrictions in my law review article in section IV.D.) Use-based restrictions would focus on specific uses of data that are particularly sensitive and for which there is widespread agreement they should be limited or disallowed altogether. But use-based restrictions are, ironically, controversial from both the perspective of industry and privacy advocates (albeit for different reasons, obviously).

The FTC doesn’t really know where to go next with use-based restrictions. The agency says that, on the one hand, it “has incorporated certain elements of the use-based model into its approach” to enforcement in the past. On the other hand, the agency says it has concerns “about adopting a pure use-based model for the Internet of Things,” since it may not go far enough in addressing the growth of more widespread data collection, especially of more sensitive information.

In sum, the agency appears to be keeping the door open on this front and hoping that a best-of-all-worlds solution miraculously emerges that extends both notice and choice and use-based limitations as the IoT expands. But the agency’s new report doesn’t give us any sort of blueprint for how that might work, and that’s likely for good reason: because it probably won’t work all that well in practice, and there will be serious costs in terms of lost innovation if they try to force unworkable solutions on this rapidly evolving marketplace.

FIPPS, Part 2: Data Minimization

The biggest policy fight that is likely to come out of this report involves the agency’s push for data minimization. The report recommends that, to minimize the risks associated with excessive data collection:

companies should examine their data practices and business needs and develop policies and practices that impose reasonable limits on the collection and retention of consumer data. However, recognizing the need to balance future, beneficial uses of data with privacy protection, staff’s recommendation on data minimization is a flexible one that gives companies many options. They can decide not to collect data at all; collect only the fields of data necessary to the product or service being offered; collect data that is less sensitive; or deidentify the data they collect. If a company determines that none of these options will fulfill its business goals, it can seek consumers’ consent for collecting additional, unexpected categories of data…

This is an unsurprising recommendation in light of the fact that, in previous major speeches on the issue, FTC Chairwoman Edith Ramirez argued that, “information that is not collected in the first place can’t be misused,” and that:

The indiscriminate collection of data violates the First Commandment of data hygiene: Thou shall not collect and hold onto personal information unnecessary to an identified purpose. Keeping data on the off chance that it might prove useful is not consistent with privacy best practices. And remember, not all data is created equally. Just as there is low quality iron ore and coal, there is low quality, unreliable data. And old data is of little value.

In my forthcoming law review article, I discuss the problem with such reasoning at length and note:

If Chairwoman Ramirez’s approach to a preemptive data use “commandment” were enshrined into a law that said, “Thou shall not collect and hold onto personal information unnecessary to an identified purpose,” such a precautionary limitation would certainly satisfy her desire to avoid hypothetical worst-case outcomes because, as she noted, “information that is not collected in the first place can’t be misused,” but it is equally true that information that is never collected may never lead to serendipitous data discoveries or new products and services that could offer consumers concrete benefits. “The socially beneficial uses of data made possible by data analytics are often not immediately evident to data subjects at the time of data collection,” notes Ken Wasch, president of the Software & Information Industry Association. If academics and lawmakers succeed in imposing such precautionary rules on the development of IoT and wearable technologies, many important innovations may never see the light of day.

FTC Commissioner Josh Wright issued a dissenting statement to the report that lambasted the staff for not conducting more robust cost-benefit analysis of the new proposed restrictions, and specifically cited how problematic the agency’s approach to data minimization was. “[S]taff merely acknowledges it would potentially curtail innovative uses of data. . . [w]ithout providing any sense of the magnitude of the costs to consumers of foregoing this innovation or of the benefits to consumers of data minimization,” he says. Similarly, in her separate statement, FTC Commissioner Maureen K. Ohlhausen worried about the report’s overly precautionary approach on data minimization when noting that, “without examining costs or benefits, [the staff report] encourages companies to delete valuable data — primarily to avoid hypothetical future harms. Even though the report recognizes the need for flexibility for companies weighing whether and what data to retain, the recommendation remains overly prescriptive,” she concludes.

Regardless, the battle lines have been drawn by the FTC staff report as the agency has made it clear that it will be stepping up its efforts to get IoT innovators to significantly slow or scale back their data collection efforts. It will be very interesting to see how the agency enforces that vision going forward and how it impacts innovation in this space. All I know is that the agency has not conducted a serious evaluation here of the trade-offs associated with such restrictions. I penned another law review article last year offering “A Framework for Benefit-Cost Analysis in Digital Privacy Debates” that they could use to begin that process if they wanted to get serious about it.

The Problem with the “Regulation Builds Trust” Argument

One of the interesting things about this and previous FTC reports on privacy and security matters is how often the agency premises the case for expanded regulation on “building trust.” The argument goes something like this (as found on page 51 of the new IoT report): “Staff believes such legislation will help build trust in new technologies that rely on consumer data, such as the IoT. Consumers are more likely to buy connected devices if they feel that their information is adequately protected.”

This is one of those commonly-heard claims that sounds so straightforward and intuitive that few dare question it. But there are problems with the logic of the “we-need-regulation-to-build-trust-and-boost-adoption” arguments we often hear in debates over digital privacy.

First, the agency bases its argument mostly on polling data. “Surveys also show that consumers are more likely to trust companies that provide them with transparency and choices,” the report says. Well, of course surveys say that! It’s only logical that consumers will say this, just as they will always say they value privacy and security more generally when asked. You might as well ask people if they love their mothers!

But what consumers claim to care about and what they actually do in the real-world are often two very different things. In the real-world, people balance privacy and security alongside many other values, including choice, convenience, cost, and more. This leads to the so-called “privacy paradox,” or the problem of many people saying one thing and doing quite another when it comes to privacy matters. Put simply, people take some risks — including some privacy and security risks — in order to reap other rewards or benefits. (See this essay for more on the problem with most privacy polls.)

Second, online activity and the Internet of Things are both growing like gangbusters despite the privacy and security concerns that the FTC raises. Virtually every metric I’ve looked at that tracks IoT activity shows astonishing growth and product adoption, and projections by all the major consultancies that have studied this consistently predict the continued rapid growth of IoT activity. Now, how can this be the case if, as the FTC claims, we’ll only see the IoT really take off after we get more regulation aimed at bolstering consumer trust? Of course, the agency might argue that the IoT will grow at an even faster clip than it is right now, but there is no way to prove that one way or the other. In any event, the agency cannot possibly claim that the IoT isn’t already growing at a very healthy clip — indeed, a lot of the hand-wringing the staff engages in throughout the report is premised precisely on the fact that the IoT is exploding faster than our ability to keep up with it! In reality, it seems far more likely that cost and complexity are the bigger impediments to faster IoT adoption, just as cost and complexity have always been the factors weighing most heavily on the adoption of other digital technologies.

Third, let’s say that the FTC is correct – and it is – when it says that a certain amount of trust is needed in terms of IoT privacy and security before consumers are willing to use more of these devices and services in their everyday lives. Does the agency imagine that IoT innovators don’t know that? Are markets and consumers completely irrational? The FTC says on page 44 of the report that, “If a company decides that a particular data use is beneficial and consumers disagree with that decision, this may erode consumer trust.” Well, if such a mismatch does exist, then the assumption should be that consumers can and will push back, or seek out new and better options. And other companies should be able to sense the market opportunity here to offer a more privacy-centric offering for those consumers who demand it in order to win their trust and business.

Finally, and perhaps most obviously, the problem with the argument that increased regulation will help IoT adoption is that it ignores how the regulations put in place to achieve greater “trust” might become so onerous or costly in practice that there won’t be as many innovations for us to adopt to begin with! Again, regulation — even very well-intentioned regulation — has costs and trade-offs.

In any event, if the agency is going to premise the case for expanded privacy regulation on this notion, it is going to have to do far more to make its case than simply assert it.

Once Again, No Appreciation of the Potential for Societal Adaptation

Let’s briefly shift to a subject that isn’t discussed in the FTC’s new IoT report at all.

Regular readers may get tired of me making this point, but I feel it is worth stressing again: Major reports and statements by public policymakers about rapidly-evolving emerging technologies are always initially prone to stress panic over patience. Rarely are public officials willing to step back, take a deep breath, and consider how a resilient citizenry might adapt to new technologies as they gradually assimilate new tools into their lives.

That is really sad, when you think about it, since humans have again and again proven capable of responding to technological change in creative ways by adopting new personal and social norms. I won’t belabor the point because I’ve already written volumes on this issue elsewhere. I tried to condense all my work into a single essay entitled, “Muddling Through: How We Learn to Cope with Technological Change.” Here’s the key takeaway:

humans have exhibited the uncanny ability to adapt to changes in their environment, bounce back from adversity, and learn to be resilient over time. A great deal of wisdom is born of experience, including experiences that involve risk and the possibility of occasional mistakes and failures while both developing new technologies and learning how to live with them. I believe it wise to continue to be open to new forms of innovation and technological change, not only because it provides breathing space for future entrepreneurialism and invention, but also because it provides an opportunity to see how societal attitudes toward new technologies evolve — and to learn from it. More often than not, I argue, citizens have found ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes.

Again, you almost never hear regulators or lawmakers discuss this process of individual and social adaptation even though they must know there is something to it. One explanation is that every generation has its own techno-boogeymen and loses faith in the ability of humanity to adapt to them.

To believe that we humans are resilient, adaptable creatures should not be read as being indifferent to the significant privacy and security challenges associated with any of the new technologies in our lives today, including IoT technologies. Overly-exuberant techno-optimists are often too quick to adopt a “Just-Get-Over-It!” attitude in response to the privacy and security concerns raised by others. But it is equally unforgivable for those who are worried about those same concerns to utterly ignore the reality of human adaptation to new technological realities.

Why are Educational Approaches Merely an Afterthought?

One final thing that troubled me about the FTC report was the way consumer and business education is mostly an afterthought. This is one of the most important roles that the FTC can and should play in terms of explaining potential privacy and security vulnerabilities to the general public and product developers alike.

Alas, the agency devotes so much ink to the more legalistic questions about how to address these issues that all we end up with in the report is this one paragraph on consumer and business education:

Consumers should understand how to get more information about the privacy of their IoT devices, how to secure their home networks that connect to IoT devices, and how to use any available privacy settings. Businesses, and in particular small businesses, would benefit from additional information about how to reasonably secure IoT devices. The Commission staff will develop new consumer and business education materials in this area.

I applaud that language, and I very much hope that the agency is serious about plowing more effort and resources into developing new consumer and business education materials in this area. But I’m a bit shocked that the FTC report didn’t even bother mentioning the excellent material already available on the “On Guard Online” website it helped create with a dozen other federal agencies. Worse yet, the agency failed to highlight the many other privacy education and “digital citizenship” efforts that are underway today to help on this front. I discuss those efforts in more detail in the closing section of my recent law review article.

I hope that the agency spends a little more time working on the development of new consumer and business education materials in this area instead of trying to figure out how to craft a quasi-regulatory regime for the Internet of Things. As I noted last year in this Maine Law Review article, that would be a far more productive use of the agency’s expertise and resources. I argued there that “policymakers can draw important lessons from the debate over how best to protect children from objectionable online content” and apply them to debates about digital privacy. Specifically, after a decade of searching for legalistic solutions to online safety concerns — and convening a half-dozen blue ribbon task forces to study the issue — we finally saw a rough consensus emerge that no single “silver-bullet” technological solution or legal quick-fix would work and that, ultimately, education and empowerment represented the better use of our time and resources. What was true for child safety is equally true for privacy and security for the Internet of Things.

It’s a shame the FTC staff squandered the opportunity it had with this new report to highlight all the good that could be done by getting more serious about focusing first on those alternative, bottom-up, less costly, and less controversial solutions to these challenging problems. One day we’ll all wake up and realize that we spent a lost decade debating legalistic solutions that were either technically unworkable or politically impossible. Just imagine if all the smart people who were spending all their time and energy on those approaches right now were instead busy devising and pushing educational and empowerment-based solutions instead!

One day we’ll get there. Sadly, if the FTC report is any indication, that day is still a ways off.

]]>
https://techliberation.com/2015/01/28/some-initial-thoughts-on-the-ftc-internet-of-things-report/feed/ 2 75351
Striking a Sensible Balance on the Internet of Things and Privacy https://techliberation.com/2015/01/16/striking-a-sensible-balance-on-the-internet-of-things-and-privacy/ https://techliberation.com/2015/01/16/striking-a-sensible-balance-on-the-internet-of-things-and-privacy/#comments Fri, 16 Jan 2015 21:08:39 +0000 http://techliberation.com/?p=75274

This week, the Future of Privacy Forum (FPF) released a new white paper entitled, “A Practical Privacy Paradigm for Wearables,” which I believe can help us find policy consensus regarding the privacy and security concerns associated with the Internet of Things (IoT) and wearable technologies. I’ve been monitoring IoT policy developments closely and I recently published a big working paper (“The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation”) that will appear shortly in the Richmond Journal of Law & Technology. I have also penned several other essays on IoT issues. So, I will be relating the FPF report to some of my own work.

The new FPF report, which was penned by Christopher Wolf, Jules Polonetsky, and Kelsey Finch, aims to accomplish the same goal I had in my own recent paper: sketching out constructive and practical solutions to the privacy and security issues associated with the IoT and wearable tech so as not to discourage the amazing, life-enriching innovations that could flow from this space. Flexibility is the key, they argue. “Premature regulation at an early stage in wearable technological development may freeze or warp the technology before it achieves its potential, and may not be able to account for technologies still to come,” the authors note. “Given that some uses are inherently more sensitive than others, and that there may be many new uses still to come, flexibility will be critical going forward.” (p. 3)

That flexible approach is at the heart of how the FPF authors want to see Fair Information Practice Principles (FIPPs) applied in this space. The FIPPs generally include: (1) notice, (2) choice, (3) purpose specification, (4) use limitation, and (5) data minimization. The FPF authors correctly note that,

The FIPPs do not establish specific rules prescribing how organizations should provide privacy protections in all contexts, but rather provide high-level guidelines. Over time, as technologies and the global privacy context have changed, the FIPPs have been presented in different ways with different emphases. Accordingly, we urge policymakers to enable the adaptation of these fundamental principles in ways that reflect technological and market developments. (p. 4)

They continue on to explain how each of the FIPPs can provide a certain degree of general guidance for the IoT and wearable tech, but also caution that: “A rigid application of the FIPPs could inhibit these technologies from even functioning, and while privacy protections remain essential, a degree of flexibility will be key to ensuring the Internet of Things can develop in ways that best help consumer needs and desires.” (p. 4) And throughout the report, the FPF authors stress the need for the FIPPs to be “practically applied” and they nicely explain how the appropriate application of any particular one of the FIPPs “will depend on the circumstances.” For those reasons, they conclude by saying, “we urge policymakers to adopt a forward-thinking, flexible application of the FIPPs.” (p. 11)

The approach that Wolf, Polonetsky, and Finch set forth in this new FPF report is very much consistent with the policy framework I sketched out in my forthcoming law review article. “The need for flexibility and adaptability will be paramount if innovation is to continue in this space,” I argued. In essence, best practices need to remain just that: best practices, not fixed, static, top-down regulatory edicts. As I noted:

Regardless of whether they will be enforced internally by firms or by ex post FTC enforcement actions, best practices must not become a heavy-handed, quasi-regulatory straitjacket. A focus on security and privacy by design does not mean those are the only values and design principles that developers should focus on when innovating. Cost, convenience, choice, and usability are all important values too. In fact, many consumers will prioritize those values over privacy and security — even as activists, academics, and policymakers simultaneously suggest that more should be done to address privacy and security concerns. Finally, best practices for privacy and security issues will need to evolve as social acceptance of various technologies and business practices evolve. For example, had “privacy by design” been interpreted strictly when wireless geolocation capabilities were first being developed, these technologies might have been shunned because of the privacy concerns they raised. With time, however, geolocation technologies have become a better understood and more widely accepted capability that consumers have come to expect will be embedded in many of their digital devices.  Those geolocation capabilities enable services that consumers now take for granted, such as instantaneous mapping services and real-time traffic updates. This is why flexibility is crucial when interpreting the privacy and security best practices.

The only thing I found missing from the FPF report was a broader discussion of other constructive privacy and security solutions that involve education, etiquette, and empowerment-based approaches. I would also have liked to see some discussion of how other existing legal mechanisms — privacy torts, contractual enforcement mechanisms, property rights, state “peeping Tom” laws, and existing privacy statutes — might cover some of the hard cases that could develop on this front. I discuss those and other “bottom-up” solutions in Section IV of my law review article and note that they can contribute to the sort of “layered” approach we need to address privacy and security concerns for the IoT and wearable tech.

In any event, I encourage everyone to check out the new Future of Privacy Forum report as well as the many excellent best practice guidelines they have put together to help innovators adopt sensible privacy and security best practices. FPF has done some great work on this front.

Additional Reading

Dispatches from CES 2015 on Privacy Implications of New Technologies (Thu, 15 Jan 2015) https://techliberation.com/2015/01/15/dispatches-from-ces-2015-on-privacy-implications-of-new-technologies/

Over at the International Association of Privacy Professionals (IAPP) Privacy Perspectives blog, I have two “Dispatches from CES 2015” up. (#1 & #2) While I was out in Vegas for the big show, I had a chance to speak on a panel entitled, “Privacy and the IoT: Navigating Policy Issues.” (Video can be found here. It’s the second one on the video playlist.) Federal Trade Commission (FTC) Chairwoman Edith Ramirez kicked off that session and stressed some of the concerns she and others share about the Internet of Things and wearable technologies in terms of the privacy and security issues they raise.

Before and after our panel discussion, I had a chance to walk the show floor and take a look at the amazing array of new gadgets and services that will soon be hitting the market. A huge percentage of the show floor space was dedicated to IoT technologies, and wearable tech in particular. But the show also featured many other amazing technologies that promise to bring consumers a wealth of new benefits in coming years. Of course, many of those technologies will also raise privacy and security concerns, as I noted in my two essays for IAPP. The first of my dispatches focuses primarily on the Internet of Things and wearable technologies that I saw at CES. In my second dispatch, I discuss the privacy and security implications of the increasing miniaturization of cameras, drone technologies, and various robotic technologies (especially personal care robots).

I open the first column by noting that “as I was walking the floor at this year’s massive CES 2015 tech extravaganza, I couldn’t help but think of the heartburn that privacy professionals and advocates will face in coming years.” And I close the second dispatch by concluding that, “The world of technology is changing rapidly and so, too, must the role of the privacy professional. The technologies on display at this year’s CES 2015 make it clear that a whole new class of concerns are emerging that will require IAPP members to broaden their issue set and find constructive solutions to the many challenges ahead.” Jump over to the Privacy Perspectives blog to read more.

Government Surveillance: Is It Time for Another Church Committee? (Wed, 17 Dec 2014) https://techliberation.com/2014/12/17/government-surveillance-is-it-time-for-another-church-committee/

This morning, a group of organizations led by Citizens for Responsibility and Ethics in Washington (CREW), R Street, and the Sunlight Foundation released a public letter to House Speaker John Boehner and Minority Leader Nancy Pelosi calling for enhanced congressional oversight of U.S. national security surveillance policies.

The letter—signed by over fifty organizations, including the Electronic Frontier Foundation, the Competitive Enterprise Institute, and the Brennan Center for Justice at the New York University School of Law, as well as a handful of individuals, including Pentagon Papers whistleblower Daniel Ellsberg—expresses deep concerns about the expansive scope and limited accountability of the intelligence activities and agencies famously exposed by whistleblower Edward Snowden in 2013. The letter states:

Congress is responsible for authorizing, overseeing, and funding these programs. In recent years, however, the House of Representatives has not always effectively performed its duties. The time for modernization is now. When the House convenes for the 114th Congress in January and adopts rules, the House should update them to enhance opportunities for oversight by House Permanent Select Committee on Intelligence (“HPSCI”) members, members of other committees of jurisdiction, and all other representatives. The House should also consider establishing a select committee to review intelligence activities since 9/11. We urge the following reforms be included in the rules package.

The proposed modernization reforms include:

1) modernizing HPSCI membership to more accurately reflect House interests by allowing chairs and ranking members of other committees with intelligence jurisdiction to select a designee on HPSCI;

2) allowing each HPSCI Member to designate a staff member of his or her choosing to represent their interests on the committee, as is the practice in the Senate;

3) making all unclassified intelligence reports quickly available to the public;

4) improving the speed and transparency of HPSCI responsiveness to member requests for information; and

5) improving general HPSCI transparency by better informing members of relevant activities like upcoming closed hearings, legislative markups, and other committee business.

The groups also urge reforms to empower all members of Congress to be informed of and involved with executive intelligence agencies’ activities. They are:

1) making all communications from the executive branch available to all Members unless the sender explicitly indicates otherwise;

2) reaffirming Members’ abilities to access, review, and publicly discuss materials already available to the public that are classified by the executive branch, as is the case with the Snowden leaks. Members should feel comfortable to discuss this kind of information without fear of reprimand;

3) providing Members with at least one staff member with access to classified information through a Top Secret/Special Compartmented Information (TS/SCI) clearance;

4) allowing Members to speak with whistleblowers without fear of reprisal; and

5) improving training for Members and staff on how to handle classified information and conduct effective congressional oversight of classified matters.

Over at the CREW blog, Daniel Schuman provides more context on the problems these groups seek to address:

Members of Congress rely on staff to do a lot of work, but most staff working on intelligence issues are not permitted to hold the necessary security clearances to do their jobs. Sometimes, the Intelligence Committee in the House intercepts mail from the executive branch addressed to all members of Congress. That same committee sits on unclassified reports, refusing to make them available to the public. Briefings provided by the intelligence community are announced for inconvenient times, do not provide enough detailed information, and members of Congress often are not allowed to take notes on what was said. The executive branch has 666,000 employees with top secret/SCI clearance and 541,000 contractors with top secret/SCI clearance, and yet often times members of Congress are not permitted to talk with one another about their briefings. Members of Congress are not allowed to publicly speak about—and staff may not read—classified information that has been published in the newspaper or on the internet. This makes no sense for the deliberative body that was designed as a check on executive power.

While these proposed reforms aim to improve congressional oversight through common-sense changes or clarifications in House procedure and committee structure, they still only address the failures of intelligence oversight that we have gleaned from our current knowledge of the byzantine maze of surveillance agency activities. The picture painted by the little knowledge we have right now is not pretty. An associated white paper presenting the reforms in more detail notes:

The last decade-and-a-half has witnessed major intelligence community failures. From the inability to connect the dots on 9/11 to false claims about weapons of mass destruction in Iraq, from the unlawful commission of torture to the inability to predict the Arab spring, from lying to Congress about the NSA to CIA surveillance of Senate staff, the intelligence community has a credibility gap. Moreover, with recent revelations about secret government activities, to the apparent surprise of many members of Congress, it is increasingly clear that Congress has not engaged in effective oversight of the intelligence community.

To get a fuller picture of the extent of the problem, the letter proposes that the House adopt a special committee to conduct a distinct, broad-based review of the activities of the intelligence community after 9/11. Similar committees have been assembled in the past to address previous shortcomings:

The last time so many revelations of government misdeeds came to light in news reports, Congress reacted by forming two special committees to investigate intelligence community activities. The reports by the Church and Pike Committees led to wholesale reforms of the intelligence community, including improving congressional oversight mechanisms. The magnitude of current revelations and intelligence community failures leads to this conclusion: the House (and Senate) must establish a distinct, broad-based review of the activities of the intelligence community since 9/11. The House should establish a committee modeled after the Church or Pike Committees, provide it adequate staffing and financial support, and give it a broad mandate to review intelligence community activities, engage in public reporting wherever possible, and issue recommendations for reform.

The Church and Pike Committees of the 1970s were products of a decade of explosive revelations of government surveillance run amok. The white paper cites a 1974 New York Times exclusive report by Seymour Hersh that revealed the CIA had been operationalized to inspect the mail, telephone communications, and residences of tens of thousands of uncharged private citizens since the 1950s. Earlier that year, allegations that the U.S. Army had been performing illegal surveillance of American citizens were verified and repudiated by Senator Sam Ervin’s military surveillance investigations. In 1975, a bombshell NSA investigation published by the Times reported that the then largely unknown intelligence unit “eavesdrops on virtually all cable, Telex, and other nontelephone communications leaving and entering the United States” and “uses computers to sort out and obtain intelligence from the contents” in the now-infamous Project Shamrock. The revealed executive abuses of the Nixon administration provided the cherry on top of a growing distrust of and anger with surreptitious U.S. surveillance practices.

Today is another era of outrageous whistleblower reports and rapidly dwindling trust in U.S. surveillance bodies. A mere 24 percent of Americans reported that they trust the government to “do the right thing” most of the time in a 2013 Rasmussen poll. (A minuscule 4 percent of your fellow Pollyanna patriots trust Uncle Sam all of the time.) Meanwhile, technological advances have allowed U.S. intelligence agencies a greater degree of potential (and, as Snowden revealed, actual) surveillance than ever before. This gap in trust and power simply cannot continue indefinitely.

While not without their problems, the Church and Pike Committees are noteworthy milestones in reclaiming congressional accountability over executive intelligence agencies run amok. Creating a new committee to comprehensively assess current surveillance agency activities, warts and all, and to recommend accountability measures addressing the unknown excesses that likely lurk in the shadows is one step in the right direction toward beating back the tentacles of unlawful government surveillance.

But if there’s one thing we’ve learned from the fruits of the 1970s committees—namely, the Foreign Intelligence Surveillance Act (FISA) of 1978—it’s that what once served as a hindrance to government abuses may one day become a party to them. For example, the Foreign Intelligence Surveillance Court (FISC) established by FISA, which was intended to provide critical oversight of federal spying programs, is today limited by the inadequate tools available to verify whether or not surveillance programs are lawful.

Imposing accountability on agencies whose missions are devoted to secrecy is a tough nut to crack. Our history struggling with this challenge suggests that these proposed reforms are good preliminary actions. But watching the watchers will continue to be an omnipresent duty.

A Nonpartisan Policy Vision for the Internet of Things (Thu, 11 Dec 2014) https://techliberation.com/2014/12/11/a-nonpartisan-policy-vision-for-the-internet-of-things/

What sort of public policy vision should govern the Internet of Things? I’ve spent a lot of time thinking about that question in essays here over the past year, as well as in a new white paper (“The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation”) that will be published in the Richmond Journal of Law & Technology early next year.

But I recently heard three policymakers articulate their recommended vision for the Internet of Things (IoT) and I found their approach so inspiring that I wanted to discuss it here in the hopes that it will become the foundation for future policy in this arena.

Last Thursday, it was my pleasure to attend a Center for Data Innovation (CDI) event on “How Can Policymakers Help Build the Internet of Things?” As the title implied, the goal of the event was to discuss how to achieve the vision of a more fully-connected world and, more specifically, how public policymakers can help facilitate that objective. It was a terrific event with many excellent panel discussions and keynote addresses.

Two of those keynotes were delivered by Senators Deb Fischer (R-Neb.) and Kelly Ayotte (R-N.H.). Below I will offer some highlights from their remarks and then relate them to the vision set forth by Federal Trade Commission (FTC) Commissioner Maureen K. Ohlhausen in some of her recent speeches. I will conclude by discussing how the Ayotte-Fischer-Ohlhausen vision can be seen as the logical extension of the Clinton Administration’s excellent 1997 Framework for Global Electronic Commerce, which proposed a similar policy paradigm for the Internet more generally. This shows how crafting policy for the IoT can and should be a nonpartisan affair.

Sen. Deb Fischer

In her opening remarks at the CDI event last week, Sen. Deb Fischer explained how “the Internet of Things can be a game changer for the U.S. economy and for the American consumer.” “It gives people more information and better tools to analyze data to make more informed choices,” she noted.

After outlining some of the potential benefits associated with the Internet of Things, Sen. Fischer continued on to explain why it is essential we get public policy incentives right first if we hope to unlock the full potential of these new technologies. Specifically, she argued that:

In order for Americans to receive the maximum benefits from increased connectivity, there are two things the government must avoid. First, policymakers can’t bury their heads in the sand and pretend this technological revolution isn’t happening only to wake up years down the road and try to micromanage a fast-changing, dynamic industry. Second, the federal government must also avoid regulation just for the sake of regulation. We need thoughtful, pragmatic responses and narrow solutions to any policy issues that arise. For too long, the only “strategy” in Washington policy-making has been to react to crisis after crisis. We should dive into what this means for U.S. global competitiveness, consumer welfare, and economic opportunity before the public policy challenges overwhelm us, before legislative and executive branches of government – or foreign governments – react without all the facts.

Fischer concluded by noting that, “it’s entirely appropriate for the U.S. government to think about how to modernize its regulatory frameworks, consolidate, renovate, and overhaul obsolete rules. We’re destined to lose to the Chinese or others if the Internet of Things is governed in the United States by rules that pre-date the VCR.”

Sen. Kelly Ayotte

Like Sen. Fischer, Ayotte similarly stressed the many economic opportunities associated with IoT technologies for both consumers and producers alike. [Note: Sen. Ayotte did not publish her remarks on her website, but you can watch her speech from the CDI event beginning around the 17-minute mark of the event video.]

Ayotte also noted that IoT is going to be a major topic for the Senate Commerce Committee and that there will be an upcoming hearing on the issue. She said that the role of the Committee will be to ensure that the various agencies looking into IoT issues are not issuing “conflicting regulatory directives” and “that what is being done makes sense and allows for future innovation that we can’t even anticipate right now.” Among the agencies she cited that are currently looking into IoT issues: FTC (privacy & security), FDA (medical device apps), FCC (wireless issues), FAA (commercial drones), NHTSA (intelligent vehicle technology), NTIA (multistakeholder privacy reviews), as well as state lawmakers and regulatory agencies.

Sen. Ayotte then explained what sort of policy framework America needed to adopt to ensure that the full potential of the Internet of Things could be realized. She framed the choice lawmakers are confronted with as follows:

We as policymakers can either create an environment that allows that to continue to grow, or one that thwarts that. To stay on the cutting edge, we need to make sure that our regulatory environment is conducive to fostering innovation.” […] “We’re living in the Dark Ages in the way some of the regulations have been framed. Companies must be properly incentivized to invest in the future, and government shouldn’t be a deterrent to innovation and job-creation.

Ayotte also stressed that “technology continues to evolve so rapidly there is no one-size-fits-all regulatory approach” that can work for a dynamic environment like this. “If legislation drives technology, the technology will be outdated almost instantly,” and “that is why humility is so important,” she concluded.

The better approach, she argued was to let technology evolve freely in a “permissionless” fashion and then see what problems developed and then address them accordingly. “[A] top-down, preemptive approach is never the best policy” and will only serve to stifle innovation, she argued. “If all regulators looked with some humility at how technology is used and whether we need to regulate or not to regulate, I think innovation would stand to benefit.”

FTC Commissioner Maureen K. Ohlhausen

Fischer and Ayotte’s remarks reflect a vision for the Internet of Things that FTC Commissioner Maureen K. Ohlhausen has articulated in recent months. In fact, Sen. Ayotte specifically cited Ohlhausen in her remarks.

Ohlhausen has actually delivered several excellent speeches on these issues and has become one of the leading public policy thought leaders on the Internet of Things in the United States today. One of her first major speeches on these issues was her October 2013 address entitled, “The Internet of Things and the FTC: Does Innovation Require Intervention?” In that speech, Ohlhausen noted that, “The success of the Internet has in large part been driven by the freedom to experiment with different business models, the best of which have survived and thrived, even in the face of initial unfamiliarity and unease about the impact on consumers and competitors.”

She also issued a wise word of caution to her fellow regulators:

It is . . . vital that government officials, like myself, approach new technologies with a dose of regulatory humility, by working hard to educate ourselves and others about the innovation, understand its effects on consumers and the marketplace, identify benefits and likely harms, and, if harms do arise, consider whether existing laws and regulations are sufficient to address them, before assuming that new rules are required.

In this and other speeches, Ohlhausen has highlighted the various other remedies that already exist when things do go wrong, including FTC enforcement of “unfair and deceptive practices,” common law solutions (torts and class actions), private self-regulation and best practices, social pressure, and so on. (Note: Inspired by Ohlhausen’s approach, I devoted the final section of my big law review article on IoT issues to a deeper exploration of all those “bottom-up” solutions to privacy and security concerns surrounding the IoT and wearable tech.)

The Clinton Administration Vision

These three women have articulated what I regard as the ideal vision for fostering the growth of the Internet of Things. It should be noted, however, that their framework is really just an extension of the Clinton Administration’s outstanding vision for the Internet more generally.

In the 1997 Framework for Global Electronic Commerce, the Clinton Administration outlined its approach toward the Internet and the emerging digital economy. As I’ve noted many times before, the Framework was a succinct and bold market-oriented vision for cyberspace governance that recommended reliance upon civil society, contractual negotiations, voluntary agreements, and ongoing marketplace experiments to solve information age problems. Specifically, it stated that “the private sector should lead [and] the Internet should develop as a market driven arena not a regulated industry.” “[G]overnments should encourage industry self-regulation and private sector leadership where possible” and “avoid undue restrictions on electronic commerce.”

Sen. Ayotte specifically cited those Clinton principles in her speech and said, “I think those words, given twenty years ago at the infancy of the Internet, are today even more relevant as we look at the challenges and the issues that we continue to face as regulators and policymakers.”

I completely agree. This is exactly the sort of vision that we need to keep innovation moving forward to benefit consumers and the economy, and this also illustrates how IoT policy can be a nonpartisan effort.

Why does this matter so much? As I noted in this recent essay, thanks to the Clinton Administration’s bold vision for the Internet:

This policy disposition resulted in an unambiguous green light for a rising generation of creative minds who were eager to explore this new frontier for commerce and communications. . . . The result of this freedom to experiment was an outpouring of innovation. America’s info-tech sectors thrived thanks to permissionless innovation, and they still do today. An annual Booz & Company report on the world’s most innovative companies revealed that 9 of the top 10 most innovative companies are based in the U.S. and that most of them are involved in computing, software, and digital technology.

In other words, America got policy right before and we can get policy right again to ensure we are again global innovation leaders. Patience, flexibility, and forbearance are the key policy virtues that nurture an environment conducive to entrepreneurial creativity, economic progress, and greater consumer choice.

Other policymakers should endorse the vision originally sketched out by the Clinton Administration and now so eloquently embraced and extended by Sen. Fischer, Sen. Ayotte, and Commissioner Ohlhausen. This is the path forward if we hope to realize the full potential of the Internet of Things.

Will Europe’s ‘Right to Be Forgotten’ Become an Unprecedented Global Censorship Regime? (Wed, 26 Nov 2014) https://techliberation.com/2014/11/26/will-europes-right-to-be-forgotten-become-an-unprecedented-global-censorship-regime/

Yesterday, the Article 29 Data Protection Working Party issued a press release providing more detailed guidance on how it would like to see Europe’s so-called “right to be forgotten” implemented and extended. The most important takeaway from the document was that, as Reuters reported, “European privacy regulators want Internet search engines such as Google and Microsoft’s Bing to scrub results globally.” Moreover, as The Register reported, the press release made it clear that “Europe’s data protection watchdogs say there’s no need for Google to notify webmasters when it de-lists a page under the so-called ‘right to be forgotten’ ruling.” (Here’s excellent additional coverage from Bloomberg: “Google.com Said to Face EU Right-to-Be-Forgotten Rules.”) These actions make it clear that European privacy regulators hope to expand the horizons of the right to be forgotten in a very significant way.

The folks over at Marketplace radio asked me to spend a few minutes with them today discussing the downsides of this proposal. Here’s the quick summary of what I told them:

  • European privacy regulators are basically calling for an unprecedented global censorship regime that would impose their speech preferences and controls on the entire planet.
  • Europe has no right to tell the rest of the world how to structure their policies governing online freedom of speech, yet they are trying to strong-arm major American tech companies like Google to do so indirectly.
  • This is a grave threat to freedom of speech, freedom of expression, and Internet openness.
  • This move sends a horrible signal to oppressive regimes worldwide. It could lead to a race to the bottom, with governments in other countries attempting to export their own speech preferences to the rest of the globe. You can kiss global Internet freedom goodbye if that happens.
  • Relatedly, if European policymakers persist in these efforts, it could lead to future trade wars, even among friendly countries. Layers of speech controls like this could become formidable non-tariff barriers to trade and limit the growth of cross-border electronic commerce in the process.
  • This certainly doesn’t help competition. Ironically, this news comes during the same week that we have learned some European policymakers want to break up Google on antitrust grounds. But the more that European regulators push Google to enforce global speech controls like this, the more market power those policymakers give the company! Google is one of the few companies that might be able to hire enough lawyers and engineers to comply with such a regulatory regime. Few other tech companies – and certainly no small startups – could ever hope to comply with this ruling. In essence, it’s a new regulatory barrier to entry that diminishes digital entrepreneurialism.
  • Correspondingly, it’s another innovation-killer for Europe. If Europeans wonder why they fell so far behind in terms of Internet innovation over the past decade, they might consider looking at the wisdom of overly-restrictive data controls and speech regulations like this.
  • Privacy is certainly an important value, and more could be done to protect it. But what European regulators are proposing here is completely over the top. It is like trying to kill a fly with an elephant gun. There are more sensible ways to encourage privacy protection.
  • Instead of trying to export their speech controls and bully global innovators, European policymakers should just consider creating their own, government-funded search engines and then force their own citizens to use them. Let them try to create their own anti-free speech fortress and see how their citizens feel about living inside it.

Stay tuned, more to come on this front. In the meantime, here’s another response worth reading from David Meyer of GigaOm.

New Paper on Privacy & Security Implications of the Internet of Things & Wearable Technology (Fri, 21 Nov 2014) https://techliberation.com/2014/11/21/new-paper-on-privacy-security-implications-of-the-internet-of-things-wearable-technology/

The Mercatus Center at George Mason University has just released my latest working paper, “The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation.” The “Internet of Things” (IoT) generally refers to “smart” devices that are connected to both the Internet and other devices. Wearable technologies are IoT devices that are worn somewhere on the body and that gather data about us for various purposes. These technologies promise to usher in the next wave of Internet-enabled services and data-driven innovation. Basically, the Internet will be “baked in” to almost everything that consumers own and come into contact with.

Some critics are worried about the privacy and security implications of the Internet of Things and wearable technology, however, and are proposing regulation to address these concerns. In my new 93-page article, I explain why preemptive, top-down regulation would derail the many life-enriching innovations that could come from these new IoT technologies. Building on a recent book of mine, I argue that “permissionless innovation,” which allows new technology to flourish and develop in a relatively unabated fashion, is the superior approach to the Internet of Things.

As I note in the paper and my earlier book, if we spend all our time living in fear of the worst-case scenarios — and basing public policies on them — then best-case scenarios can never come about. As the old saying goes: nothing ventured, nothing gained. Precautionary principle-based regulation paralyzes progress and must be avoided.  We instead need to find constructive, “bottom-up” solutions to the privacy and security risks accompanying these new IoT technologies instead of top-down controls that would limit the development of life-enriching IoT innovations.

The better alternative is to deal with concerns creatively as they develop, using a balanced, layered approach  involving many different solutions, including: educational efforts, technological empowerment tools, social norms, public and watchdog pressure, industry best practices and self-regulation, transparency, torts and products liability law, and targeted enforcement of existing legal standards as needed.

Generally speaking, patience, humility, and forbearance by policymakers is crucial to allowing greater innovation and consumer choice in this arena. Importantly, policymakers should not forget that societal and individual adaptation will play a role here, just as it has during so many other turbulent technological transformations.

This article can be downloaded on my Mercatus Center page, on SSRN, or at ResearchGate. I am hoping to find a law or policy journal interested in publishing this paper soon. If you are with a journal and are interested, please contact me. [UPDATE 12/3/14: This paper has been accepted for publication in the Richmond Journal of Law & Technology, Vol. 21, Issue 6 (2015).]

Finally, if you are interested in this topic, you might want to flip through the slides I prepared for a presentation I made at the Federal Communications Commission in September:

Additional reading: