I’m excited to announce that the Minnesota Journal of Law, Science & Technology has just published the final version of my 78-page paper, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle.” My thanks to the excellent team at the Journal, who made the final product a much better paper than the one I turned in to them! I poured my heart and soul into this article and hope others find it useful. It’s the culmination of all my work on technopanics and threat inflation in information policy debates, much of which I originally developed here in various essays over the years. In coming weeks, I hope to elaborate on the paper’s themes in a couple of follow-up posts here.

The paper can be found on the Minn. J. L. Sci. & Tech. website or on SSRN. I’ve also embedded it below in a Scribd reader. Here’s the executive summary: Continue reading →

[Based on a forthcoming article in the Minnesota Journal of Law, Science & Technology, Vol. 14, Issue 1, Winter 2013, http://mjlst.umn.edu]

I hope everyone caught these recent articles by two of my favorite journalists, Kashmir Hill (“Do We Overestimate The Internet’s Danger For Kids?”) and Larry Magid (“Putting Techno-Panics into Perspective”). In these and other essays, Hill and Magid do a nice job discussing how society responds to new Internet risks while also explaining how those risks are often blown out of proportion to begin with.

Continue reading →

[UPDATE (2/14/2013): As noted here, this paper was published by the Minnesota Journal of Law, Science & Technology in its Winter 2013 issue. Please refer to that post for more details and cite the final version of the paper going forward.]

I’m pleased to report that the Mercatus Center at George Mason University has just released my huge new white paper, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle.” I’ve been working on this paper for a long time and look forward to finding it a home in a law journal sometime soon. Here’s the summary of this 80-page paper:

Fear is an extremely powerful motivating force, especially in public policy debates, where it is used in an attempt to sway opinion or bolster the case for action. Often, this action involves preemptive regulation based on faulty assumptions and evidence. Such fears are frequently on display in the Internet policy arena and take the form of a full-blown “technopanic,” a real-world manifestation of this illogical fear. While it’s true that cyberspace has its fair share of troublemakers, there is no evidence that the Internet is leading to greater problems for society.

This paper considers the structure of fear appeal arguments in technology policy debates and then outlines how those arguments can be deconstructed and refuted in both cultural and economic contexts. Several examples of fear appeal arguments are offered with a particular focus on online child safety, digital privacy, and cybersecurity. The various factors contributing to “fear cycles” in these policy areas are documented.

To the extent that these concerns are valid, they are best addressed by ongoing societal learning, experimentation, resiliency, and coping strategies rather than by regulation. If steps must be taken to address these concerns, education and empowerment-based solutions represent superior approaches to dealing with them compared to a precautionary principle approach, which would limit beneficial learning opportunities and retard technological progress.

The complete paper can be found on the Mercatus site here, on SSRN, or on Scribd.  I’ve also embedded it below in a Scribd reader. Continue reading →

Sen. Amy Klobuchar just released a letter to Facebook demanding the site require “a prominent safety button or link on the profile pages of users under the age of 18”—akin to the so-called “panic button” app launched earlier this week by the UK’s Child Exploitation & Online Protection Centre (CEOP). She doesn’t seem to realize that this app is available to all Facebook users, not just those in the UK. But her focus on empowerment tools and education is admirable, and it’s certainly a fair question to ask what sites like Facebook and MySpace are doing in these areas.

Unfortunately, Klobuchar’s letter also engages in blatant fear-mongering:

Recent research has shown that one in four American teenagers have been victims of a cyber predator.  And when teens experience abusive behavior online, only ten percent discuss it with their parents and even fewer report the misconduct to law enforcement.  It’s clear that teenagers need to know how to respond to a cyber attack and I believe we need stronger reporting mechanisms to keep our kids safe.

Klobuchar doesn’t actually cite anything, so it’s not clear what research she’s relying on. The 25% statistic is particularly incendiary, suggesting a nationwide cyber-predation crisis—perhaps leading the public to believe that 8 or 9 million teens have been lured into sexual encounters offline. Perhaps the Senator considers every cyber-bully a cyber predator—which might get her to the 25% number. But there are two serious problems with that moral equivalence.

First, to equate child predation with peer bullying is to engage in a dangerous game of defining deviancy down. Predation and bullying are radically different things. The first (sexual abuse) is a clear and heinous crime that can lead to long-term psychological damage. The second might be a crime in certain circumstances, but generally not.  And it is even less likely to be a crime when it occurs among young peers, which research shows constitutes the vast majority of cases. As Adam Thierer and I noted in our Congressional testimony last year, there are legitimate concerns about cyberbullying, but it’s something best dealt with by parents and schools rather than prosecutors (like Klobuchar in her pre-Senate career).

Second, a series of official task forces has concluded that the cyber-predator technopanic is vastly overblown. Continue reading →

Congressmen working on national intelligence and homeland security either don’t know how to secure their own home Wi-Fi networks (it’s easy!) or don’t understand why they should bother. If you live outside the Beltway, you might think the response to this problem would be to redouble efforts to educate everyone about the importance of personal responsibility for data security, starting with Congressmen and their staffs. But of course those who live inside the Beltway know that the solution isn’t education or self-help but… you guessed it… to excoriate Google for spying on members of Congress (and, of course, to call for bigger government)!

Consumer Watchdog (which doesn’t actually claim any consumers as members) held a press conference this morning about its latest anti-Google stunt, announced last night on its “Inside Google” blog: CWD drove by five Congressmen’s houses in the DC area last week looking for unencrypted Wi-Fi networks. At Jane Harman’s (D-CA) home, they found two unencrypted networks named “Harmanmbr” and “harmantheater” that suggest the networks are Harman’s. So they sent Harman a letter demanding that she hold hearings on Google’s collection of Wi-Fi data, charging Google with “WiSpying.” This is a classic technopanic and the most craven, cynical kind of tech politics—dressed in the “consumer” mantle.

The Wi-Fi/Street View Controversy

Rewind to mid-May, when Google voluntarily disclosed that the cars it uses to build a photographic library of what’s visible from public streets for Google Maps Street View had been unintentionally collecting small amounts of information from unencrypted Wi-Fi hotspots like Harman’s. These hotspots can be accessed by anyone who drives or walks by with a Wi-Fi device—thus potentially exposing data sent over those networks between, say, a laptop in the kitchen and the wireless router plugged into the cable modem.

Google’s Street View allows you to virtually walk down any public street and check out the neighborhood… Continue reading →

My friend Anne Collier of Net Family News, one of America’s great sages on child safety issues, has produced a terrific list of reasons “Why Technopanics are Bad.”  Technopanics and moral panics are topics I’ve spent quite a bit of time commenting on here. (See 1, 2, 3, 4.) Anne is a rare voice of sanity and sensible advice when it comes to online child safety issues and I encourage you to read all her excellent work on the subject, including her book with Larry Magid, MySpace Unraveled: A Parent’s Guide to Teen Social Networking.  Anyway, here’s Anne’s list, and I encourage you to go over to her site and contribute your thoughts and suggestions about what else to add:

Technopanics are bad because they…

  • Cause fear, which interferes with parent-child communication, which in turn puts kids at greater risk.
  • Cause schools to fear and block digital media when they need to be teaching constructive use, employing social-technology devices and teaching new media literacy and citizenship classes throughout the curriculum.
  • Turn schools into barriers rather than contributors to young people’s constructive use.
  • Increase the irrelevance of school to active young social-technology users by sequestering or banning educational technology, and hamstring some of the most spirited and innovative educators.
  • Distract parents, educators, and policymakers from real risks – including, for example, child-pornography laws that do not cover situations where minors can simultaneously be victim and “perpetrator” and, tragically, can become registered sex offenders in cases where there was no criminal intent (e.g., see this).
  • Reduce the competitiveness of US education among developed countries already effectively employing educational technology and social media in schools.
  • Reduce the competitiveness of US technology and media businesses practicing good corporate citizenship where youth online safety is concerned.
  • Lead to bad legislation, which aggravates above outcomes and takes the focus off areas where good laws on the books can be made relevant to current technology use.
  • Widen the participation gap for youth – technopanics are barriers to children’s and teens’ full, constructive participation in participatory culture and democracy.

A few days ago, I posted an essay about the recent history of “moral panics,” or “technopanics,” as Alice Marwick refers to them in her brilliant new article about the recent panic over MySpace and social networking sites in general.

I got to thinking about technopanics again today after reading the Washington Post’s front-page article, “When the Phone Goes With You, Everyone Else Can Tag Along.” In the piece, Post staff writer Ellen Nakashima discusses the rise of mobile geo-location technologies and services, which are becoming more prevalent as cell phones grow more sophisticated. These services are often referred to as “LBS,” which stands for “location-based services.”

Many of the phones and service plans offered today include LBS technologies, which are very useful for parents like me who might want to monitor the movements of their children. Those same geo-location technologies can be used for other LBS purposes. Geo-location technologies are now being married to social networking utilities to create an entirely new service and industry: “social mapping.” Social mapping allows subscribers to find their friends on a digital map and then instantly network with them. Companies such as Loopt and Helio have already rolled out commercial social mapping services. Loopt has also partnered with major carriers to roll out its service nationwide, including on the new iPhone 3G. It is likely that many other rivals will join these firms in coming months and years.

These new LBS services present exciting opportunities for users to network with friends and family, and they also open up a new world of commercial and advertising opportunities. Think of how stores could offer instantaneous coupons as you walk by, for example. And very soon, you can imagine a world where many of our traditional social networking sites and services are linked into LBS tools in a seamless fashion. But as today’s Washington Post article notes, mobile geo-location and social mapping are also raising some privacy concerns:
Continue reading →

[Image: Time magazine’s 1995 “Cyberporn” cover]

Sean Garrett of the 463 Blog posted an excellent essay this week about the great moral panic of 1995, when Time magazine ran its famous “Cyberporn” cover story that included this unforgettable image. Unfortunately for Time, the article also included a great deal of erroneous information about online pornography that was pulled from a bogus study claiming that 83.5 percent of the images on Usenet newsgroups were pornographic! The study was immediately debunked by scholars, but not before Congress rushed to judgment and passed the Communications Decency Act, which sought to ban all “indecent” online content. Its indecency provisions were later struck down as unconstitutional, of course.

Anyway, Sean’s essay also brought to my attention this amazing new article by Alice Marwick, a PhD candidate in the Department of Media, Culture, and Communication at New York University: “To Catch a Predator? The MySpace Moral Panic.” The topic of “moral panics” is something I have done quite a bit of work on, but Marwick’s paper is absolute must-reading on the subject, especially as it pertains to the recent moral panic over MySpace and social networking sites.
Continue reading →

Last week, it was my pleasure to speak at a Cato Institute event on “The End of Transit and the Beginning of the New Mobility: Policy Implications of Self-Driving Cars.” I followed Cato Institute Senior Fellow Randal O’Toole and Marc Scribner, a Research Fellow at the Competitive Enterprise Institute, who provided a broad and excellent overview of all the major issues at play in the debate over driverless cars. I highly recommend the papers that Randal and Marc have published on these issues.

My role on the panel was to do a deeper dive into the privacy and security implications of not just the autonomous vehicles of our future, but also the intelligent vehicle technologies of the present. I discussed these issues in greater detail in my recent Mercatus Center working paper, “Removing Roadblocks to Intelligent Vehicles and Driverless Cars,” which was co-authored with Ryan Hagemann. (That article will appear in a forthcoming edition of the Wake Forest Journal of Law & Policy.)  I’ve embedded the video of the event down below (my remarks begin at the 38:15 mark) as well as my speaking notes. Again, please consult the longer paper for details.


Continue reading →

If there are two general principles that unify my recent work on technology policy and innovation issues, they would be as follows. To the maximum extent possible:

  1. We should avoid preemptive and precautionary-based regulatory regimes for new innovation. Instead, our policy default should be “innovation allowed” (or “permissionless innovation”), and innovators should be considered “innocent until proven guilty” (unless, that is, a thorough benefit-cost analysis has been conducted that documents the clear need for immediate preemptive restraints).
  2. We should avoid rigid, “top-down” technology-specific or sector-specific regulatory regimes and/or regulatory agencies and instead opt for a broader array of more flexible, “bottom-up” solutions (education, empowerment, social norms, self-regulation, public pressure, etc.) as well as reliance on existing legal systems and standards (torts, product liability, contracts, property rights, etc.).

I was very interested, therefore, to come across two new essays that make opposing arguments and proposals. The first is a recent Slate op-ed by John Frank Weaver, “We Need to Pass Legislation on Artificial Intelligence Early and Often.” The second is Ryan Calo’s new Brookings Institution white paper, “The Case for a Federal Robotics Commission.”

Weaver argues that new robot technology “is going to develop fast, almost certainly faster than we can legislate it. That’s why we need to get ahead of it now.” In order to preemptively address concerns about new technologies such as driverless cars or commercial drones, “we need to legislate early and often,” Weaver says. Stated differently, Weaver is proposing “precautionary principle”-based regulation of these technologies. The precautionary principle generally refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.

Calo argues that we need “the establishment of a new federal agency to deal with the novel experiences and harms robotics enables,” since there exist “distinct but related challenges that would benefit from being examined and treated together.” These issues, he says, “require special expertise to understand and may require investment and coordination to thrive.”

I’ll address Weaver’s and Calo’s proposals in turn. Continue reading →