First Amendment & Free Speech

In my most recent weekly Forbes column, “Common Sense About Kids, Facebook & The Net,” I consider the wisdom of an online petition that the child safety advocacy group Common Sense Media is pushing, which demands that Facebook give up any thought of letting kids under the age of 13 on the site. “There is absolutely no proof of any meaningful social or educational value of Facebook for children under 13,” their petition insists. “Indeed, there are very legitimate concerns about privacy, as well as its impact on children’s social, emotional, and cognitive development.” Common Sense Media doesn’t offer any evidence to substantiate those claims, but one can sympathize with some of the general worries. Nonetheless, as I argue in my essay:

Common Sense Media’s approach to the issue is short-sighted. Calling for a zero-tolerance, prohibitionist policy toward kids on Facebook (and interactive media more generally) is tantamount to a bury-your-head-in-the-sand approach to child safety. Again, younger kids are increasingly online, often because their parents allow or even encourage it. To make sure they get online safely and remain safe, we’ll need a different approach than Common Sense Media’s unworkable “just-say-no” model.

Think about it this way: Would it make sense to start a petition demanding that kids be kept out of town squares, public parks, or shopping malls? Most of us would find the suggestion ludicrous. Continue reading →

Via Twitter, Andrew Grossman brought to my attention this terrifically interesting interview with a Kuwaiti censor that appeared in the Kuwait Times (“Read No Evil – Senior Censor Defends Work, Denies Playing Big Brother”). In the interview, the censor, Dalal Al-Mutairi, head of the Foreign Books Department at the Ministry of Information, speaks in a remarkably candid, casual tone about the job she and other Kuwaiti censors do every day. My favorite line comes when Dalal tells the reporter how working as a censor is so very interesting and enlightening: “I like this work. It gives us experience, information and we always learn something new.” I bet! But what a shame that others in her society will be denied the same pleasure of always learning something new. Of course, like all censors, Dalal probably believes that she is doing a great public service by screening all culture and content to make sure the masses do not consume offensive, objectionable, or harmful content.

But here’s where the reporter missed a golden opportunity to ask Dalal the one question that you must always ask a censor if you get to meet one: If the content you are censoring is so destructive to the human soul or psyche, how then is it that you are such a well-adjusted person? And Dalal certainly seems like a well-adjusted person. Although the reporter doesn’t tell us much about her personal life or circumstances, Dalal volunteers this much about herself and her fellow censors: “Many people consider the censor to be a fanatic and uneducated person, but this isn’t true. We are the most literate people as we have read much, almost every day. We receive a lot of information from different fields. We read books for children, religious books, political, philosophical, scientific ones and many others.” Well of course you do… because you are lucky enough to have access to all that content! But you are also taking steps to make sure the rest of your society doesn’t consume it on the theory that it would harm them or harm public morals in some fashion. But, again, how is it that you have not been utterly corrupted by it all, Ms. Dalal? After all, you get to consume all that impure, sacrilegious, and salacious stuff! Shouldn’t you be some kind of monster by now?

How can this inconsistency be explained? The answer to this riddle can be found in the “Third-Person Effect Hypothesis.” Continue reading →

Andrew Orlowski of The Register (U.K.) recently posted a very interesting essay making the case for treating online copyright and privacy as essentially the same problem in need of the same solution: increased property rights. In his essay (“‘Don’t break the internet’: How an idiot’s slogan stole your privacy“), he argues that, “The absence of permissions on our personal data and the absence of permissions on digital copyright objects are two sides of the same coin. Economically and legally they’re an absence of property rights – and an insistence on preserving the internet as a childlike, utopian world, where nobody owns anything, or ever turns a request down. But as we’ve seen, you can build things like libraries with permissions too – and create new markets.” He argues that “no matter what law you pass, it won’t work unless there’s ownership attached to data, and you, as the individual, are the ultimate owner. From the basis of ownership, we can then agree what kind of rights are associated with the data – eg, the right to exclude people from it, the right to sell it or exchange it – and then build a permission-based world on top of that.”

And so, he concludes, we should set aside concerns about Internet regulation and information control and get down to the business of engineering solutions that would help us property-tize both intangible creations and intangible facts about ourselves to better shield our intellectual creations and our privacy in the information age. He builds on the thoughts of Mark Bide, a tech consultant:

For Bide, privacy and content markets are just technical challenges that need to be addressed intelligently. “You can take two views,” he told me. “One is that every piece of information flowing around a network is a good thing, and we should know everything about everybody, and have no constraints on access to it all.” People who believe this, he added, tend to be inflexible – there is no half-way house. “The alternative view is that we can take the technology to make privacy and intellectual property work on the network. The function of copyright is to allow creators and people who invest in creation to define how it can be used. That’s the purpose of it.”

“So which way do we want to do it?” he asks. “Do we want to throw up our hands and do nothing? The workings of a civilised society need both privacy and creators’ rights.”

But this is a new way of thinking about things: it will be met with cognitive dissonance. Copyright activists who fight property rights on the internet and have never seen a copyright law they like generally do like their privacy. They want to preserve it, and will support laws that do. But to succeed, they’ll need to argue for stronger property rights. They have yet to realise that their opponents in the copyright wars have been arguing for those too, for years. Both sides of the copyright “fight” actually need the same thing. This is odd, I said to Bide. How can he account for this irony? “Ah,” says Bide. “Privacy and copyright are two things nobody cares about unless it’s their own privacy, and their own copyright.”

These are important insights that get at a fundamental truth that all too many people ignore today: At root, most information control efforts are related and solutions for one problem can often be used to address others. But there’s another insight that Orlowski ignores: Whether we are discussing copyright, privacy, online speech and child safety, or cybersecurity, all these efforts to control the free flow of digitized bits over decentralized global networks will be increasingly complex, costly, and riddled with myriad unintended consequences. Importantly, that is true whether you seek to control information flows through top-down administrative regulation or by assigning and enforcing property rights in intellectual creations or private information.

Let me elaborate a bit (and I apologize for the rambling rant that follows).

Continue reading →

Yesterday on TechCrunch, Josh Constine posted an interesting essay about how some in the press were “Selling Digital Fear” on the privacy front. His specific target was The Wall Street Journal, which has been running an ongoing investigation of online privacy issues with a particular focus on online apps. Much of the reporting in their “What They Know” series has been valuable in that it has helped shine light on some data collection practices and privacy concerns that deserve more scrutiny. But as Constine notes, sometimes the articles in the WSJ series lack sufficient context, fail to discuss trade-offs, or do not identify any concrete harm or risk to users. In other words, some of it is just simple fear-mongering. Constine argues:

Reality has yet to stop media outlets from yelling about privacy, and because the WSJ writers were on assignment, they wrote the “Selling You On Facebook” hit piece despite thin findings. These kind of articles can make mainstream users so worried about the worst-case scenario of what could happen to their data, they don’t see the value they get in exchange for it. “Selling You On Facebook” does bring up the important topic of how apps can utilize personal data granted to them by their users, but it overstates the risks. Yes, the business models of Facebook and the apps on its platform depend on your personal information, but so do the services they provide. That means each user needs to decide what information to grant to who, and Facebook has spent years making the terms of this value exchange as clear as possible.

“While sensationalizing the dangers of online privacy sure drives page views and ad revenue,” Constine also noted, “it also impedes innovation and harms the business of honest software developers.” These trade-offs are important because, to the extent policymakers get more interested in pursuing privacy regulations based on these fears, they could force higher prices or less innovation upon us with very little benefit in exchange.

Of course, the press generating hypothetical fears or greatly inflating dangers is nothing new. We have seen it happen many times in the past and it can be seen at work in many other fields today (online child safety is a good example). In my recent 80-page paper on “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” I discussed how and why the press and other players inflate threats and sell fear. Here’s a passage from my paper: Continue reading →

I want to highly recommend everyone watch this interesting new talk by danah boyd on “Culture of Fear + Attention Economy = ?!?!” In her talk, danah discusses “how fear gets people into a frenzy” or panic about new technologies and new forms of culture. “The culture of fear is the idea that fear can be employed by marketers, politicians, the media, and the public to really regulate the public… such that they can be controlled,” she argues. “Fear isn’t simply the product of natural forces. It can systematically be generated to entice, motivate, or suppress. It can be leveraged as a political tool and those in power have long used fear for precisely these goals.” I discuss many of these issues in my new 80-page white paper, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle.”

Webstock ’12: danah boyd – Culture of Fear + Attention Economy = ?!?! from Webstock on Vimeo.

danah points out that new media is often leveraged to generate fear and so we should not be surprised when the Internet and digital technologies are used in much the same way. She also correctly notes that our cluttered, cacophonous information age might also be causing an escalation of fear-based tactics. “The more there are stimuli competing for your attention, the more likely it is that fear is going to be the thing that will drive your attention” to the things that some want you to notice or worry about.

I spent some time in my technopanics paper discussing this point in Section III.C (“Bad News Sells: The Role of the Media, Advocates, and the Listener.”) Here’s the relevant passage: Continue reading →

The Mercatus Center at George Mason University has just released my new white paper, “The Perils of Classifying Social Media Platforms as Public Utilities.” [PDF] I first presented a draft of this paper last November at a Michigan State University conference on “The Governance of Social Media.” [Video of my panel here.]

In this paper, I note that to the extent public utility-style regulation has been debated within the Internet policy arena over the past decade, the focus has been almost entirely on the physical layer of the Internet. The question has been whether Internet service providers should be considered “essential facilities” or “natural monopolies” and regulated as public utilities. The debate over “net neutrality” regulation has been animated by such concerns.

While that debate still rages, the rhetoric of public utilities and essential facilities is increasingly creeping into policy discussions about other layers of the Internet, such as the search layer. More recently, there have been rumblings within academic and public policy circles regarding whether social media platforms, especially social networking sites, might also possess public utility characteristics. Presumably, such a classification would entail greater regulation of those sites’ structures and business practices.

Proponents of treating social media platforms as public utilities offer a variety of justifications for regulation. Amorphous “fairness” concerns animate many of these calls, but privacy and reputational concerns are also frequently mentioned as rationales for regulation. Proponents of regulation also sometimes invoke “social utility” or “social commons” arguments in defense of increased government oversight, even though these notions lack clear definition.

Social media platforms do not resemble traditional public utilities, however, and there are good reasons why policymakers should avoid a rush to regulate them as such. Continue reading →

Today, the FCC issued a Notice of Inquiry, responding to an emergency petition filed last August regarding the temporary shutdown of mobile services by officers of the San Francisco Bay Area Rapid Transit (BART) district. The petition asked the FCC to issue a declaratory ruling that the shutdown violated the Communications Act. The following statement can be attributed to Larry Downes, Senior Adjunct Fellow at TechFreedom, and Berin Szoka, President of TechFreedom:

What BART did clearly violated the First Amendment, and needlessly put passengers at risk by cutting off emergency services just when they were needed most. But we need a court to say so, not the FCC.

The FCC has no authority here. The state did not order the shutdown of the network, nor does the state run the network. BART police simply turned off equipment the district doesn’t own—a likely violation of its contractual obligations to the carriers. But BART did nothing that violated FCC rules governing network operators. To declare the local government an “agent” of the carriers would set an extremely dangerous precedent for an agency with a long track record of regulatory creep.

There are other compelling reasons to use the courts and not regulators to enforce free speech rights. Regulatory agencies move far too slowly. Here, it took the FCC six months just to open an inquiry! Worse, today’s Notice of Inquiry will lead, if anything, to more muddled rulings and regulations. These may unintentionally give cover to local authorities trying to parse them for exceptions and exclusions, or at least the pretense of operating within FCC guidelines.

It would have been far better to make clear to BART, either through negotiations or the courts, that its actions were unconstitutional and dangerous. Long before today’s action, BART adopted new policies that better respect First Amendment rights and common sense. But now the regulatory wheels have creaked into motion. Who knows where they’ll take us, or when?

[UPDATE: 2/14/2013: As noted here, this paper was published by the Minnesota Journal of Law, Science & Technology in their Winter 2013 edition. Please refer to that post for more details and cite this final version of the paper going forward.]

I’m pleased to report that the Mercatus Center at George Mason University has just released my huge new white paper, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle.” I’ve been working on this paper for a long time and look forward to finding it a home in a law journal some time soon. Here’s the summary of this 80-page paper:

Fear is an extremely powerful motivating force, especially in public policy debates where it is used in an attempt to sway opinion or bolster the case for action. Often, this action involves preemptive regulation based on false assumptions and evidence. Such fears are frequently on display in the Internet policy arena and take the form of full-blown “technopanic,” or real-world manifestations of this illogical fear. While it’s true that cyberspace has its fair share of troublemakers, there is no evidence that the Internet is leading to greater problems for society.

This paper considers the structure of fear appeal arguments in technology policy debates and then outlines how those arguments can be deconstructed and refuted in both cultural and economic contexts. Several examples of fear appeal arguments are offered with a particular focus on online child safety, digital privacy, and cybersecurity. The various factors contributing to “fear cycles” in these policy areas are documented.

To the extent that these concerns are valid, they are best addressed by ongoing societal learning, experimentation, resiliency, and coping strategies rather than by regulation. If steps must be taken to address these concerns, education and empowerment-based solutions represent superior approaches to dealing with them compared to a precautionary principle approach, which would limit beneficial learning opportunities and retard technological progress.

The complete paper can be found on the Mercatus site here, on SSRN, or on Scribd. I’ve also embedded it below in a Scribd reader. Continue reading →

The White House’s “Consumer Data Privacy in a Networked World” report outlines a revised framework for consumer privacy, proposes a “Consumer Privacy Bill of Rights,” and calls on Congress to pass new legislation to regulate online businesses. The following statement can be attributed to Berin Szoka, President of TechFreedom, and Larry Downes, TechFreedom Senior Adjunct Fellow:

This Report begins and ends as constitutional sleight-of-hand. President Obama starts by reminding us of the Fourth Amendment’s essential protection against “unlawful intrusion into our homes and our personal papers”—by government. But the Report recommends no reform whatsoever for outdated laws that have facilitated a dangerous expansion of electronic surveillance. That is the true threat to our privacy. The report dismisses it in a footnote.

Instead, the Report calls for extensive new regulation of Internet businesses to address little more than the growing pains of a vibrant emerging economy. “For businesses to succeed online,” President Obama asserts, “consumers must feel secure.” Yet online businesses that rely on data to deliver innovative and generally free services are the one bright spot in a sour economy. Experience has shown consumers ultimately bear the costs of regulations imposed on emerging technologies, no matter how well-intentioned.

The report is a missed opportunity. The Administration should have called for increased protections against government’s privacy intrusions. Focusing on the real Bill of Rights would have respected not only the Fourth Amendment, but also the First Amendment. The Supreme Court made clear last year that the private sector’s use of data is protected speech—an issue also not addressed by this Report.

Szoka and Downes are available for comment at media@techfreedom.org.

Over at TIME.com I write that if you didn’t like SOPA because it threatened free speech, then you probably won’t like the new “Right to be Forgotten” proposed in the EU. Prof. Jane Yakowitz contributes some great insights to the piece. What I dislike most about the rule is that it subordinates expression to privacy:

[T]he new law would flip the traditional understanding of privacy as an exception to free speech. What this means is that if we treat free expression as the more important value, then one has to prove a harmful violation of privacy before the speaker can be silenced. Under the proposed law, however, it’s the speaker who must show that his speech is a “legitimate” exception to a claim of privacy. That is, the burden of proof is switched so that speakers are the ones who would have to justify their speech.

Read the whole thing at TIME.com.