I’m excited to announce that the Minnesota Journal of Law, Science & Technology has just published the final version of my 78-page paper, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle.” My thanks to the excellent team at the Journal, who made the final product a much better paper than the one I turned in to them! I poured my heart and soul into this article and hope others find it useful. It’s the culmination of all my work on technopanics and threat inflation in information policy debates, much of which I originally developed here in various essays through the years. In coming weeks, I hope to elaborate on themes from the paper in a couple of posts here.

The paper can be found on the Minn. J. L. Sci. & Tech. website or on SSRN. I’ve also embedded it below in a Scribd reader. Here’s the executive summary:

[Based on forthcoming article in the Minnesota Journal of Law, Science & Technology, Vol. 14 Issue 1, Winter 2013, http://mjlst.umn.edu]

I hope everyone caught these recent articles by two of my favorite journalists, Kashmir Hill (“Do We Overestimate The Internet’s Danger For Kids?”) and Larry Magid (“Putting Techno-Panics into Perspective.”) In these and other essays, Hill and Magid do a nice job discussing how society responds to new Internet risks while also explaining how those risks are often blown out of proportion to begin with.


[UPDATE: 2/14/2013: As noted here, this paper was published by the Minnesota Journal of Law, Science & Technology in their Winter 2013 edition. Please refer to that post for more details and cite this final version of the paper going forward.]

I’m pleased to report that the Mercatus Center at George Mason University has just released my huge new white paper, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle.” I’ve been working on this paper for a long time and look forward to finding it a home in a law journal some time soon.  Here’s the summary of this 80-page paper:

Fear is an extremely powerful motivating force, especially in public policy debates, where it is used in an attempt to sway opinion or bolster the case for action. Often, this action involves preemptive regulation based on dubious assumptions and evidence. Such fears are frequently on display in the Internet policy arena and take the form of a full-blown “technopanic,” a real-world manifestation of this illogical fear. While it’s true that cyberspace has its fair share of troublemakers, there is no evidence that the Internet is leading to greater problems for society.

This paper considers the structure of fear appeal arguments in technology policy debates and then outlines how those arguments can be deconstructed and refuted in both cultural and economic contexts. Several examples of fear appeal arguments are offered with a particular focus on online child safety, digital privacy, and cybersecurity. The various factors contributing to “fear cycles” in these policy areas are documented.

To the extent that these concerns are valid, they are best addressed by ongoing societal learning, experimentation, resiliency, and coping strategies rather than by regulation. If steps must be taken to address these concerns, education and empowerment-based solutions represent superior approaches to dealing with them compared to a precautionary principle approach, which would limit beneficial learning opportunities and retard technological progress.

The complete paper can be found on the Mercatus site here, on SSRN, or on Scribd. I’ve also embedded it below in a Scribd reader.

Sen. Amy Klobuchar just released a letter to Facebook demanding the site require “a prominent safety button or link on the profile pages of users under the age of 18”—akin to the so-called “panic button” app launched earlier this week by the UK’s Child Exploitation & Online Protection Centre (CEOP). She doesn’t seem to realize that this app is available to all Facebook users, not just those in the UK. But her focus on empowerment tools and education is admirable, and it’s certainly a fair question to ask what sites like Facebook and MySpace are doing in these areas.

Unfortunately, Klobuchar’s letter also engages in blatant fear-mongering:

Recent research has shown that one in four American teenagers have been victims of a cyber predator.  And when teens experience abusive behavior online, only ten percent discuss it with their parents and even fewer report the misconduct to law enforcement.  It’s clear that teenagers need to know how to respond to a cyber attack and I believe we need stronger reporting mechanisms to keep our kids safe.

Klobuchar doesn’t actually cite anything, so it’s not clear what research she’s relying on. The 25% statistic is particularly incendiary, suggesting a nationwide cyber-predation crisis—perhaps leading the public to believe 8 or 9 million teens have been lured into sexual encounters offline. Perhaps the Senator considers every cyber-bully a cyber predator—which might get to the 25% number. But there are two serious problems with that moral equivalence.

First, to equate child predation with peer bullying is to engage in a dangerous game of defining deviancy down. Predation and bullying are radically different things. The first (sexual abuse) is a clear and heinous crime that can lead to long-term psychological damage. The second might be a crime in certain circumstances, but generally not.  And it is even less likely to be a crime when it occurs among young peers, which research shows constitutes the vast majority of cases. As Adam Thierer and I noted in our Congressional testimony last year, there are legitimate concerns about cyberbullying, but it’s something best dealt with by parents and schools rather than prosecutors (like Klobuchar in her pre-Senate career).

Second, a series of official task forces have concluded that the cyberpredator technopanic is vastly overblown.

Congressmen working on national intelligence and homeland security either don’t know how to secure their own home Wi-Fi networks (it’s easy!) or don’t understand why they should bother. If you live outside the Beltway, you might think the response to this problem would be to redouble efforts to educate everyone about the importance of personal responsibility for data security, starting with Congressmen and their staffs. But of course those who live inside the Beltway know that the solution isn’t education or self-help but… you guessed it… excoriating Google for spying on members of Congress (and bigger government, of course)!

Consumer Watchdog (which doesn’t actually claim any consumers as members) held a press conference this morning about their latest anti-Google stunt, announced last night on their “Inside Google” blog: CWD drove by five Congressmen’s houses in the DC area last week looking for unencrypted Wi-Fi networks. At Jane Harman’s (D-CA) home, they found two unencrypted networks named “Harmanmbr” and “harmantheater” that suggest the networks are Harman’s. So they sent Harman a letter demanding that she hold hearings on Google’s collection of Wi-Fi data, charging Google with “WiSpying.” This is a classic technopanic and the most craven, cynical kind of tech politics—dressed in the “consumer” mantle.

The Wi-Fi/Street View Controversy

Rewind to mid-May, when Google voluntarily disclosed that the cars it used to build a photographic library of what’s visible from public streets for Google Maps Street View had been unintentionally collecting small amounts of information from unencrypted Wi-Fi hotspots like Harman’s. These hotspots can be accessed by anyone who might drive or walk by with a Wi-Fi device—thus potentially exposing data sent over those networks between, say, a laptop in the kitchen and the wireless router plugged into the cable modem.

Google’s Street View allows you to virtually walk down any public street and check out the neighborhood.

My friend Anne Collier of Net Family News, one of America’s great sages on child safety issues, has produced a terrific list of reasons “Why Technopanics are Bad.”  Technopanics and moral panics are topics I’ve spent quite a bit of time commenting on here. (See 1, 2, 3, 4.) Anne is a rare voice of sanity and sensible advice when it comes to online child safety issues and I encourage you to read all her excellent work on the subject, including her book with Larry Magid, MySpace Unraveled: A Parent’s Guide to Teen Social Networking.  Anyway, here’s Anne’s list, and I encourage you to go over to her site and contribute your thoughts and suggestions about what else to add:

Technopanics are bad because they…

  • Cause fear, which interferes with parent-child communication, which in turn puts kids at greater risk.
  • Cause schools to fear and block digital media when they need to be teaching constructive use, employing social-technology devices and teaching new media literacy and citizenship classes throughout the curriculum.
  • Turn schools into barriers rather than contributors to young people’s constructive use.
  • Increase the irrelevancy of school to active young social-technology users via the sequestering or banning of educational technology and hamstring some of the most spirited and innovative educators.
  • Distract parents, educators, and policymakers from real risks – including, for example, child-pornography laws that do not cover situations where minors can simultaneously be victim and “perpetrator” and, tragically, become registered sex offenders in cases where there is no criminal intent (e.g., see this).
  • Reduce the competitiveness of US education among developed countries already effectively employing educational technology and social media in schools.
  • Reduce the competitiveness of US technology and media businesses practicing good corporate citizenship where youth online safety is concerned.
  • Lead to bad legislation, which aggravates above outcomes and takes the focus off areas where good laws on the books can be made relevant to current technology use.
  • Widen the participation gap for youth – technopanics are barriers for children and teens to full, constructive participation in participatory culture and democracy.

A few days ago, I posted an essay about the recent history of “moral panics,” or “technopanics,” as Alice Marwick refers to them in her brilliant new article about the recent panic over MySpace and social networking sites in general.

I got thinking about technopanics again today after reading the Washington Post’s front-page article, “When the Phone Goes With You, Everyone Else Can Tag Along.” In the piece, Post staff writer Ellen Nakashima discusses the rise of mobile geo-location technologies and services, which are becoming more prevalent as cell phones grow more sophisticated. These services are often referred to as “LBS,” which stands for “location-based services.”

Many of the phones and service plans offered today include LBS technologies, which are very useful for parents like me who might want to monitor the movements of their children. Those same geo-location technologies can be used for other LBS purposes. Geo-location technologies are now being married to social networking utilities to create an entirely new service and industry: “social mapping.” Social mapping allows subscribers to find their friends on a digital map and then instantly network with them. Companies such as Loopt and Helio have already rolled out commercial social mapping services. Loopt has also partnered with major carriers to roll out its service nationwide, including on the new iPhone 3G. It is likely that many other rivals will join these firms in coming months and years.

These new LBS services present exciting opportunities for users to network with friends and family, and they also open up a new world of commercial and advertising opportunities. Think of how stores could offer instantaneous coupons as you walk by, for example. And very soon, you can imagine a world where many of our traditional social networking sites and services are linked to LBS tools in a seamless fashion. But as today’s Washington Post article notes, mobile geo-location and social mapping are also raising some privacy concerns:

Time technopanic cover

Sean Garrett of the 463 Blog posted an excellent essay this week about the great moral panic of 1995, when Time magazine ran its famous cover “Cyberporn” story that included this unforgettable image. Unfortunately for Time, the article also included a great deal of erroneous information about online pornography that was pulled from a bogus study that found 83.5 percent of all online images were pornographic! The study was immediately debunked by scholars, but not before Congress rushed to judgment and passed the Communications Decency Act, which sought to ban all “indecent” online content. It was later struck down as unconstitutional, of course.

Anyway, Sean’s essay also brought to my attention this amazing new article by Alice Marwick, a PhD Candidate in the Department of Media, Culture, and Communication at New York University: “To Catch a Predator? The MySpace Moral Panic.” The topic of “moral panics” is something I have done quite a bit of work on, but Marwick’s paper is absolute must-reading on the topic, especially as it pertains to the recent moral panic over MySpace and social networking sites.

This article originally appeared at techfreedom.org.

Twenty years ago today, President Clinton signed the Telecommunications Act of 1996. John Podesta, his chief of staff, immediately saw the problem: “Aside from hooking up schools and libraries, and with the rather major exception of censorship, Congress simply legislated as if the Net were not there.”

Here’s our take on what Congress got right (some key things), what it got wrong (most things), and what an update to the key laws that regulate the Internet should look like. The short version is:

  • End FCC censorship of “indecency”
  • Focus on promoting competition
  • Focus regulation on consumers rather than arbitrary technological silos or political whim
  • Get the FCC out of the business of helping government surveillance

Trying, and Failing, to Censor the Net

Good: The Act is most famous for Section 230, which made Facebook and Twitter possible. Without 230, such platforms would have been held liable for the speech of their users — just as newspapers are liable for letters to the editor. Trying to screen user content would simply have been impossible. Sharing user-generated content (UGC) on sites like YouTube and social networks would’ve been tightly controlled or simply might never have taken off. Without Section 230, we might all still be locked into AOL!

Bad: Still, the Act was very much driven by a technopanic over “protecting the children.”

  • Internet Censorship. 230 was married to a draconian crackdown on Internet indecency. Aimed at keeping pornography away from minors, the rest of the Communications Decency Act — rolled into the Telecom Act — would have required age verification of all users, not just on porn sites, but probably any UGC site, too. Fortunately, the Supreme Court struck this down as a ban on anonymous speech online.
  • Broadcast Censorship. Unfortunately, the FCC is still in the censorship business for traditional broadcasting. The 1996 Act did nothing to check the agency’s broad powers to decide how long a glimpse of a butt or a nipple is too much for Americans’ sensitive eyes.

Unleashing Competition—Slowly

Good: Congress unleashed over $1.3 trillion in private broadband investment, pitting telephone companies and cable companies against each other in a race to serve consumers — for voice, video, and broadband service.

  • Legalizing Telco Video. In 1984, Congress had (mostly) prohibited telcos from providing video service — largely on the assumption that it was a monopoly. Congress reversed that, which eventually meant telcos had the incentive to invest in networks that could carry video — and super-fast broadband.
  • Breaking Local Monopolies. Congress also barred localities from blocking new entry by denying a video “franchise.”
  • Encouraging Cable Investment. The 1992 Cable Act had briefly imposed price regulation on basic cable packages. This proved so disastrous that the Democratic FCC retreated — but only after killing a cycle of investment and upgrades, delaying cable modem service by years. In 1996, Congress finally put a stake through the heart of such rate regulation, removing investment-killing uncertainty.

Bad: While the Act laid the foundations for what became facilities-based network competition, its immediate focus was pathetically short-sighted: trying to engineer artificial competition for telephone service.

  • Unbundling Mandates. The Act created an elaborate set of requirements that telephone companies “unbundle” parts of their networks so that resellers could use them, at sweetheart prices, to provide “competitive” service. The FCC then spent the next nine years fighting over how to set these rates.
  • Failure of Vision. Meanwhile, competing networks provided fierce competition: cable providers gained over half the telephony market with a VoIP service, and 47% of customers have simply cut the cord — switching entirely to wireless. Though the FCC refuses to recognize it, broadband is becoming more competitive, too: 2014 saw telcos invest in massive upgrades, bringing 25–75 Mbps speeds to more than half the country by pushing fiber closer to homes. The cable-telco horse race is fiercer than ever — and Google Fiber has expanded its deployment of a third pipe to the home, while cable companies are upgrading to provide gigabit-plus speeds and wireless broadband has become a real alternative for rural America.
  • Delaying Fiber. The greatest cost of the FCC’s unbundling shenanigans was delaying the major investments telcos needed to keep up with cable. Not until 2003 did the FCC make clear that it would not impose unbundling mandates on fiber — which pushed Verizon to begin planning its FiOS fiber-to-the-home network. The other crucial step came in 2006, when the Commission finally clamped down on localities that demanded lavish ransoms for allowing the deployment of new networks, which stifled competition.

Regulation

Good: With the notable exception of unbundling mandates, the Act was broadly deregulatory.

  • General thrust. Congress could hardly have been more clear: “It is the policy of the United States… to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.”
  • Ongoing Review & Deregulation. Congress gave the FCC broad discretion to ratchet down regulation to promote competition.

Bad: The Clinton Administration realized that technological change was rapidly erasing the lines separating different markets, and had proposed a more technology-neutral approach in 1993. But Congress rejected that approach. The Act continued to regulate by dividing technologies into silos: broadcasting (Title III), telephone (Title II) and cable (Title VI). Title I became a catch-all for everything else. Crucially, Congress didn’t draw a clear line between Title I and Title II, setting in motion a high-stakes fight that continues today.

  • Away from Regulatory Silos. Bill Kennard, Clinton’s FCC Chairman, quickly saw just how obsolete the Act was. His 1999 Strategic Plan remains a roadmap for FCC reform.
  • Away from Title II. Kennard also indicated that he favored placing all broadband in Title I — mainly because he understood that Title II was designed for a monopoly and would tend to perpetuate it. Vibrant competition between telcos and cable companies could happen only under Title I. But it was the Bush FCC that made this official, classifying cable modem as Title I in 2002 and telco DSL in 2005.
  • Net Neutrality Confusion. The FCC spent a decade trying to figure out how to regulate net neutrality, losing in court twice, distracting the agency from higher priorities — like promoting broadband deployment and adoption — and making telecom policy, once an area of non-partisan pragmatism, a fiercely partisan ideological cesspool.
  • Back to Title II. In 2015, the FCC reclassified broadband under Title II — not because it didn’t have other legal options for regulating net neutrality, but because President Obama said it should. He made the issue part of his re-assertion of authority after Democrats lost the 2014 midterm elections. Net neutrality and Title II became synonymous, even though they have little to do with each other. Now, the FCC’s back in court for the third time.
  • Inventing a New Act. Unless the courts stop it, the FCC will exploit the ambiguities of the ‘96 Act to essentially write a new Act out of thin air: regulating way up with Title II, using its forbearance powers to temporarily suspend politically toxic parts of the Act (like unbundling), and inventing wholly new rules that give the FCC maximum discretion—while claiming the power to do anything that somehow promotes broadband. The FCC calls this all “modernization” but it’s really a staggering power grab that allows the FCC to control the Internet in the murkiest way possible.
  • Bottom line: The 1996 Act gives the FCC broad authority to regulate in the “public interest,” without effectively requiring the FCC to gauge the competitive effects of what it does. The agency’s stuck in a kind of Groundhog Day of over-regulation, constantly over-doing it without ever learning from its mistakes.

Time for a #CommActUpdate

Censorship. The FCC continues to censor dirty words and even brief glimpses of skin on television because of a 1978 decision that assumes parents are helpless to control their kids’ media consumption. Today, parental control tools make this assumption obsolete: parents can easily block programming marked as inappropriate. Congress should require the FCC to focus on outright obscenity — and let parents choose for themselves.

Competition. If the 1996 Act served to allow two competing networks, a rewrite should focus on driving even fiercer cable-telco competition, encouraging Google Fiber and others to build a third pipe to the home, and making wireless an even stronger competitor.

  • Title II. If you wanted to protect cable companies from competition, you couldn’t find a better way to do it than Title II. Closing that Pandora’s Box forever will encourage companies like Google Fiber to enter the market. But Congress needs to finish what the 1996 Act started: it’s not enough to stop localities from denying franchises for video service (and thus broadband, too).
  • Local Barriers. Congress should crack down on the moronic local practices that have made deployment of new networks prohibitive — learning from the success of Google Fiber cities, which have cut red tape, lowered fees and generally gotten out of the way. Pending bipartisan legislation would make these changes for federal assets, and require federal highway projects to include Dig Once conduits to make fiber deployment easier. That’s particularly helpful for rural areas, which the FCC has ignored, but making deployment easier inside cities will require making municipal rights of way easier to use. Instead of rushing to build their own broadband networks, localities should have to first at least try to stimulate private deployment.

Regulation. Technological silos made little sense in 1993. Today, they’re completely obsolete.

  • Unchecked Discretion. The FCC’s right about one thing: rigid rules don’t make sense either, given how fast technology is changing. But giving the FCC sweeping discretion is even more dangerous: it makes regulating the Internet inherently political, subject to presidential whim and highly sensitive to elections.
  • The Fix. There’s a simple solution: write clear standards that let the FCC work across all communications technologies, but that require the FCC to prove that its tinkering actually makes consumers better off. As long as the FCC can do whatever it claims is in the “public interest,” the Internet will never be safe.
  • Rethinking the FCC. Indeed, Congress should seriously consider breaking up the FCC, transferring its consumer protection functions to the Federal Trade Commission and its spectrum functions to the Commerce Department.

Encryption. Since 1994, the FCC has had the power to require “telecommunications services” to be wiretap-ready — and the discretion to decide how to interpret that term. Today, the FBI is pushing for a ban on end-to-end encryption — so law enforcement can get backdoor access into services like Snapchat. Unfortunately, foreign governments and malicious hackers could use those backdoors, too. Congress is stalling, but the FCC could give law enforcement exactly what it wants — using the same legal arguments it used to reclassify mobile broadband under Title II. Law enforcement is probably already using this possibility to pressure Internet companies against adopting secure encryption. Congress should stop the FCC from requiring back doors.

Smart Device Paranoia

October 5, 2015

The idea that the world needs further dumbing down was really the last thing on my mind. Yet this is exactly what Jay Stanley argues for in a recent post on Free Future, the ACLU tech blog.

Specifically, Stanley is concerned by the proliferation of “smart devices,” from smart homes to smart watches, and the enigmatic algorithms that power them. Exhibit A: the Volkswagen “smart control devices” designed to deliberately mis-measure diesel emissions. Rather than treating it as an isolated case, Stanley extrapolates the Volkswagen scandal into a parable about the dangers of smart devices more generally, and calls for the recognition of “the virtue of dumbness”:

When we flip a coin, its dumbness is crucial. It doesn’t know that the visiting team is the massive underdog, that the captain’s sister just died of cancer, and that the coach is at risk of losing his job. It’s the coin’s very dumbness that makes everyone turn to it as a decider. … But imagine the referee has replaced it with a computer programmed to perform a virtual coin flip. There’s a reason we recoil at that idea. If we were ever to trust a computer with such a task, it would only be after a thorough examination of the computer’s code, mainly to find out whether the computer’s decision is based on “knowledge” of some kind, or whether it is blind as it should be.

While recoiling is a bit melodramatic, it’s clear from this that “dumbness” is not even the key issue at stake. What Stanley is really concerned about is bias or partiality (what he dubs “neutrality anxiety”), which is a concern for “dumb” devices like coins no less than for smart ones, and the same goes for opacity. A physical coin can be biased, a programmed coin can be fair, and at first glance the fairness of a physical coin is not really any more obvious.

Yet this is the argument Stanley uses to justify his proposed requirement that all smart device code be open to the public for scrutiny going forward. Based on a knee-jerk commitment to transparency, he gives zero weight to the social benefit of allowing software creators a level of trade secrecy, especially as a potential substitute for patent and copyright protections. This is all the more ironic, given that Volkswagen used existing copyright law to hide its own malfeasance.

More importantly, the idea that the only way to check a virtual coin is to look at the source code is a serious non sequitur. After all, in-use testing was how Volkswagen was actually caught in the end. What matters, in other words, is how the coin behaves in large and varied samples. In either the virtual or the physical case, the best and least intrusive way to check a coin is simply to do thousands of flips. But what takes hours with a dumb coin takes a fraction of a second with a virtual coin. So I know which I prefer.
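The behavioral-testing point can be made concrete. Here is a minimal sketch (the function names and parameters are my own illustration, not anything from Stanley’s post) of auditing a “virtual coin” purely from the outside: flip it many times and check whether the observed heads rate is statistically consistent with fairness, using a simple z-score on the binomial proportion.

```python
import random

def looks_fair(coin, n=100_000, z_limit=4.0):
    """Flip `coin` (a zero-argument callable returning True for heads)
    n times and test whether the observed heads rate is consistent
    with a fair coin (p = 0.5), without ever inspecting its code."""
    heads = sum(coin() for _ in range(n))
    p_hat = heads / n
    # Standard error of the sample proportion under the fair-coin null.
    se = (0.25 / n) ** 0.5
    z_score = (p_hat - 0.5) / se
    return abs(z_score) <= z_limit

# A fair virtual coin and one secretly biased 60/40 toward heads.
rng = random.Random(42)
fair_coin = lambda: rng.random() < 0.5
biased_coin = lambda: rng.random() < 0.6

fair_ok = looks_fair(fair_coin)      # passes the behavioral audit
biased_ok = looks_fair(biased_coin)  # fails it decisively
```

With 100,000 flips, a 60/40 coin produces a z-score of roughly 63, wildly outside anything a fair coin could plausibly generate, while a fair coin stays comfortably inside the bound. No source code is consulted at any point, which is the sense in which large-sample behavior, not code review, certifies the coin.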
