Intermediary Deputization & Section 230

There is a major controversy rocking the UK over the far-reaching press gag orders known as “super-injunctions,” especially because they’ve been brought to the fore by a sex scandal involving famous footballer Ryan Giggs and reality TV star Imogen Thomas. (This blog post is now officially illegal in the UK.) In my latest Techland post, I explain the controversy and argue that while such injunctions are legally enforceable–Facebook has a London office with over 50 employees, and today comes word that Twitter is starting up its UK operation–they are not practically enforceable, because once the information is out, it cannot be controlled. I wrote:

>Controlling information is possible, but only at the margin and at great cost. As information technology advances, that margin at which information can be controlled gets thinner and thinner, and the costs of doing so become greater and greater. So given the apparent futility of keeping facts secret, you’d think officials would look to find better ways of confronting the new reality. That’s unfortunately not the case.

>“Why are we assuming that the world of communication, developing as rapidly as it is, can never be brought under control by other technological developments?” asked the head of the U.K.’s judiciary yesterday. “I am not giving up on the possibility that people who in effect peddle lies about others through modern technology may one day be brought under control.”

>And we should not forget to look in the mirror. While the U.S. has some of the world’s most extensive free speech and press liberties, it seems every week there is a new proposal to control what information can be published online.

One of my favorite topics lately has been the challenges faced by information control regimes. Jerry Brito and I are writing a big paper on this issue right now. Part of the story we tell is that the sheer scale and volume of modern information flows is becoming so overwhelming that it raises practical questions about just how effective any information control regime can be. [See our recent essays on the topic: 1, 2, 3, 4, 5.] As we continue our research, we’ve been attempting to unearth good metrics and factoids to help tell this story. It’s challenging because there aren’t many consistent data sets depicting online data growth over time, and some of the best anecdotes from key digital companies are released only sporadically. Anyway, I’d love to hear from others about good metrics and data sets we should be examining. In the meantime, here are a few fun facts I’ve unearthed in my research so far. Please let me know if more recent data is available. [Note: Last updated 7/18/11]

  • Facebook: users submit around 650,000 comments on the 100 million pieces of content served up every minute on its site.[1]  People on Facebook install 20 million applications every day.[2]
  • YouTube: every minute, 48 hours of video are uploaded.  According to Peter Kafka of The Wall Street Journal, “That’s up 37 percent in the last six months, and 100 percent in the last year. YouTube says the increase comes in part because it’s easier than ever to upload stuff, and in part because YouTube has started embracing lengthy live streaming sessions. YouTube users are now watching more than 3 billion videos a day. That’s up 50 percent from the last year, which is also a huge leap, though the growth rate has declined a bit: Last year, views doubled from a billion a day to two billion in six months.”[3]
  • eBay is now the world’s largest online marketplace with more than 90 million active users globally and $60 billion in transactions annually, or $2,000 every second.[4]
  • Google: 34,000 searches per second (2 million per minute; 121 million per hour; 3 billion per day; 88 billion per month).[5]
  • Twitter already has 300 million users producing 140 million Tweets a day, which adds up to a billion Tweets every 8 days[6] (roughly 1,600 Tweets per second). “On the first day Twitter was made available to the public, 224 tweets were sent. Today, that number of updates are posted at least 10 times a second.”[7]
  • Apple: more than 10 billion apps have been downloaded from its App Store by customers in over 77 countries.[8] According to Chris Burns of SlashGear, “Currently it appears that another thousand apps are downloaded every 9 seconds in the Android Marketplace while every 3 seconds another 1,000 apps are downloaded in the App Store.”
  • Yelp: as of July 2011 the site hosted over 18 million user reviews.[9]
  • Wikipedia: Every six weeks, there are 10 million edits made to Wikipedia.[10]
  • “Humankind shared 65 exabytes of information in 2007, the equivalent of every person in the world sending out the contents of six newspapers every day.”[11]
  • Researchers at the San Diego Supercomputer Center at the University of California, San Diego, estimate that, in 2008, the world’s 27 million business servers processed 9.57 zettabytes, or 9,570,000,000,000,000,000,000 bytes of information.  This is “the digital equivalent of a 5.6-billion-mile-high stack of books from Earth to Neptune and back to Earth, repeated about 20 times a year.” The study also estimated that enterprise server workloads are doubling about every two years, “which means that by 2024 the world’s enterprise servers will annually process the digital equivalent of a stack of books extending more than 4.37 light-years to Alpha Centauri, our closest neighboring star system in the Milky Way Galaxy.”[12]
  • According to Dave Evans, Cisco’s chief futurist and chief technologist for the Cisco Internet Business Solutions Group, about 5 exabytes of unique information were created in 2008. That’s 1 billion DVDs. Fast forward three years and we are creating 1.2 zettabytes, with one zettabyte equal to 1,024 exabytes. “This is the same as every person on Earth tweeting for 100 years, or 125 million years of your favorite one-hour TV show,” says Evans. Our love of high-definition video accounts for much of the increase. By Cisco’s count, 91% of Internet data in 2015 will be video.[13]
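Several of the throughput figures above can be cross-checked with back-of-the-envelope arithmetic. Here is a quick sanity-check sketch in Python; every input is a number quoted in the bullets themselves, nothing new:

```python
# Cross-check the per-second and per-day throughput figures quoted above.
SECONDS_PER_DAY = 60 * 60 * 24  # 86,400

# Google: 34,000 searches per second.
google_per_sec = 34_000
print(f"searches/minute: {google_per_sec * 60:,}")                # 2,040,000 (~2 million)
print(f"searches/day: {google_per_sec * SECONDS_PER_DAY:,}")      # 2,937,600,000 (~3 billion)

# Twitter: 140 million Tweets a day implies the per-second rate cited.
tweets_per_day = 140_000_000
print(f"Tweets/second: {tweets_per_day / SECONDS_PER_DAY:,.0f}")  # ~1,620

# UCSD estimate: 9.57 zettabytes, written out in full.
print(f"9.57 ZB = {957 * 10**19:,} bytes")
```

The numbers hang together reasonably well: 34,000 searches per second does work out to roughly 2 million a minute and 3 billion a day, and 140 million Tweets a day is about 1,620 per second.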

[1]     Ken Deeter, “Live Commenting: Behind the Scenes,” February 7, 2011.

[4]     eBay, “Who We Are.”

[5]     Matt McGee, “By The Numbers: Twitter Vs. Facebook Vs. Google Buzz,” SearchEngineLand, February 23, 2010.

[7]     Nicholas Jackson, “Infographic: A Look at Twitter’s Explosive Five-Year History,” The Atlantic, July 18, 2011.

[9]     “10 Things You Should Know about Yelp,” accessed July 18, 2011.

[10]   “Wikipedia: Edit Growth Measured in Time between Every 10,000,000th Edit.”

[11]   Martin Hilbert and Priscila Lopez, “The World’s Technological Capacity to Store, Communicate, and Compute Information,” Science, February 10, 2011.

[12]   Rex Graham, “Business Information Consumption: 9,570,000,000,000,000,000,000 Bytes per Year,” UC San Diego News Center, April 6, 2011.

[13]   Julie Bort, “10 Technologies That Will Change the World in the Next 10 Years,” Network World, July 15, 2011.

My latest Forbes column is a celebration of 47 U.S.C. §230, otherwise known as “Section 230.” Sec. 230 turns 15 years old this year, and I argue that this important law has “helped foster the abundance of informational riches that lies at our fingertips today” and has served as “the foundation of our Internet freedoms.”  Sadly, however, few people have even heard of it. Worse yet, the law is under attack from various academics and organizations who want it modified to address a variety of online problems. But, as I note in my essay:

>If the threat of punishing liability is increased, the chilling effect on the free exchange of views and information would likely be quite profound. Many site administrators would immediately start removing massive amounts of content to avoid liability. More simply, they might just shut down any interactive features on their sites or limit service in other ways.

Head over to Forbes to read the rest. And here’s a graphic I put together illustrating all the new fault lines in the war against Sec. 230. It will be included in a new paper on the issue that I am wrapping up right now.

Last November, I penned an essay on these pages about the COICA legislation that had recently been approved unanimously by the U.S. Senate Judiciary Committee. While I praised Congress’s efforts to tackle the problem of “rogue websites” — sites dedicated to trafficking in counterfeit goods and/or distributing copyright infringing content — I warned that the bill lacked crucial safeguards to protect free speech and due process, as several dozen law professors had also cautioned. Thus, I suggested several changes to the legislation that would have limited its scope to truly bad actors while reducing the probability of burdening protected expression through “false positives.” Thanks in part to the efforts of Sen. Ron Wyden (D-Ore.), COICA never made it to a floor vote last session.

Today, three U.S. Senators introduced a similar bill, entitled the PROTECT IP Act (bill text), which, like COICA, establishes new mechanisms for combating Internet sites that are “dedicated to infringing activities.” I’m glad to see that lawmakers adopted several of my suggestions, making the PROTECT IP Act a major improvement over its predecessor. While the new bill still contains some potentially serious problems, on net, it represents a more balanced approach to fighting online copyright and trademark infringement while recognizing fundamental civil liberties.


POLITICO reports that a bill aimed at combating so-called “rogue websites” will soon be introduced in the U.S. Senate by Sen. Patrick Leahy. The legislation, entitled the PROTECT IP Act, will substantially resemble COICA (PDF), a bill that was reported unanimously out of the Senate Judiciary Committee late last year but did not reach a floor vote. As more details about the new bill emerge, we’ll likely have much more to say about it here on TLF.

I discussed my concerns about and suggested changes to the COICA legislation here last November; the PROTECT IP Act reportedly contains several new provisions aimed at mitigating concerns about the statute’s breadth and procedural protections. However, as Mike Masnick points out on Techdirt, the new bill — unlike COICA — contains a private right of action, although that right may not permit rights holders to disable infringing domain names. Also unlike COICA, the PROTECT IP Act would apparently require search engines to cease linking to domain names that a court has deemed to be “dedicated to infringing activities.”

For a more in-depth look at this contentious and complex issue, check out the panel discussion that the Competitive Enterprise Institute and TechFreedom hosted last month. Our April 7 event explored the need for, and concerns about, legislative proposals to combat websites that facilitate and engage in unlawful counterfeiting and copyright infringement. The event was moderated by Juliana Gruenwald of National Journal. The panelists included me, Danny McPherson of VeriSign, Tom Sydnor of the Association for Competitive Technology, Dan Castro of the Information Technology & Innovation Foundation, David Sohn of the Center for Democracy & Technology, and Larry Downes of TechFreedom.

CEI-TechFreedom Event: What Should Lawmakers Do About Rogue Websites? from CEI Video on Vimeo.

A federal judge in Illinois has refused to allow a plaintiff to match IP addresses to individual names in a piracy case, indicating that use of IP addresses without any other evidence is too unreliable in identifying actual perpetrators, and as such, violates the rights of those caught in what he termed a “fishing expedition.”

In his decision, Judge Harold Baker pointed to one of several recent cases in which paramilitary-type police raids on the residences of persons suspected of downloading child pornography turned up nothing. What had happened was that the real culprit had used the household’s unsecured wireless Internet connection.


User-driven websites — also known as online intermediaries — frequently come under fire for disabling user content due to bogus or illegitimate takedown notices. Facebook is at the center of the latest controversy involving a bogus takedown notice. On Thursday morning, the social networking site disabled Ars Technica’s page after receiving a DMCA takedown notice alleging the page contained copyright infringing material. While details about the claim remain unclear, given that Facebook restored Ars’s page yesterday evening, it’s a safe bet that the takedown notice was without merit.

Understandably, Ars Technica wasn’t exactly pleased that its Facebook page — one of its top sources of incoming traffic — was shut down for seemingly no good reason. Ars was particularly disappointed by how Facebook handled the situation. In an article posted yesterday (and updated throughout the day), Ars co-founder Ken Fisher and senior editor Jacqui Cheng chronicled their struggle in getting Facebook to simply discuss the situation with them and allow Ars to respond to the takedown notice.

Facebook took hours to respond to Ars’s initial inquiry, and didn’t provide a copy of the takedown notice until the following day. Several other major tech websites, including ReadWriteWeb and TheNextWeb, also covered the issue, noting that Ars Technica is the latest in a series of websites to have had their Facebook pages wrongly disabled. In a follow-up article posted today, Ars elaborated on what happened and offered some tips to Facebook on how it could have better handled the situation.

It’s totally fair to criticize how Facebook deals with content takedown requests. Ars is right that the company could certainly do a much better job of handling the process, and Facebook will hopefully re-evaluate its procedures in light of this widely publicized snafu. In calling out Facebook’s flawed approach to dealing with takedown requests, however, Ars Technica doesn’t do justice to the larger, more fundamental problem of bogus takedown notices.


When it comes to information control, everybody has a pet issue and everyone will be disappointed when law can’t resolve it. I was reminded of this truism while reading a provocative blog post yesterday by computer scientist Ben Adida entitled “(Your) Information Wants to be Free.” Adida’s essay touches upon an issue I have been writing about here a lot lately: the complexity of information control — especially in the context of individual privacy. [See my essays on “Privacy as an Information Control Regime: The Challenges Ahead,” “And so the IP & Porn Wars Give Way to the Privacy & Cybersecurity Wars,” and this recent FTC filing.]

In his essay, Adida observes that:

In 1984, Stewart Brand famously said that information wants to be free. John Perry Barlow reiterated it in the early 90s, and added “Information Replicates into the Cracks of Possibility.” When this idea was applied to online music sharing, it was cool in a “fight the man!” kind of way. Unfortunately, information replication doesn’t discriminate: your personal data, credit cards and medical problems alike, also want to be free. Keeping it secret is really, really hard.

Quite right. We’ve been debating the complexities of information control in the Internet policy arena for the last 20 years and I think we can all now safely conclude that information control is hugely challenging regardless of the sort of information in question. As I’ll note below, that doesn’t mean control is impossible, but the relative difficulty of slowing or stopping information flows of all varieties has increased exponentially in recent years.

But Adida’s more interesting point is the one about the selective morality at play in debates over information control. That is, people generally expect or favor information freedom in some arenas, but then get pretty upset when they can’t crack down on information flows elsewhere. Indeed, some people can get downright religious about the whole “information-wants-to-be-free” thing in some cases and then, without missing a beat, turn around and talk like information totalitarians in the next breath.

Here is a chart of the Bitcoin-dollar exchange rate for the past six months. The arrow notes the date my column on the virtual currency was published. The day after that piece was published, the Bitcoin exchange rate reached an all-time high of $1.19. Yesterday, just over a week later, it was pushing $2.

A wiser fella than myself once said that correlation is not causation, and no doubt my article was at most a contributing factor in Bitcoin’s recent run-up. Bitcoin is simply getting increasingly mainstream attention, and with that come more speculators and more speculation about mainstream adoption. The chart above lends a lot of credence to Tim Lee’s bubble critique, so I wanted to make sure I wasn’t giving that argument short shrift.

There may well be a Bitcoin bubble, and it may even be likely. But again, I think that misses the greater point about what Bitcoin represents. Bitcoin may be tulips and the bubble may burst, but the innovation—distributed, anonymous payments—is here to stay. Napster went bust, but its innovation presaged BitTorrent, which is here to stay. Could the Bitcoin project itself go bust? Certainly, but the innovation I’ve been talking about, solving the double-spending problem, will be taken up and improved by others, just as others picked up and ran with Napster’s innovation.

I want to start thinking through the practical and legal implications of that innovation. If you don’t think the innovation could ever allow for a useful store of value, then mine is a fool’s errand. I guess I’m betting on the success of a censorship resistant currency.

I’m gratified that my recent writing on the Bitcoin virtual currency project has stirred much conversation and I thought I’d take a moment to continue that conversation.

Tim Lee has written two posts critiquing the viability of Bitcoin from the supply and demand sides. Dan Rothschild has responded in part. Tyler Cowen also weighed in.

To address Tim I’ll simply say this: Do I think Bitcoin will replace the dollar? No. Might Bitcoin have certain systemic design flaws that might impede its success? Quite possibly. Will Bitcoin become the de facto, manipulation-proof currency of the internet? Who knows. Tim’s posts are a somewhat technical critique of Bitcoin’s long-term feasibility. It’s a great contribution, but since I’m neither a gold bug nor a Bitcoin booster per se, I don’t find it especially interesting.

That all said, what I do think is revolutionary about Bitcoin is that its developers have solved, without the use of a middleman, the double-spending problem faced by virtual currencies. That gives us license to realistically imagine a world without regulable financial intermediaries online.
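For readers wondering what “solving the double-spending problem without a middleman” looks like in practice: Bitcoin ties each batch of transactions to a proof-of-work puzzle, so that rewriting the transaction history would require redoing all the computational work. Here is a toy sketch of the idea in Python; it is a simplified stand-in, not Bitcoin’s actual format (real mining applies double SHA-256 to a binary block header against a numeric target):

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce so that SHA-256(block_data + nonce) begins with
    `difficulty` hex zeros -- a simplified stand-in for Bitcoin's target."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

# Finding the nonce takes many attempts; checking a claimed nonce takes one hash.
nonce = mine("Alice pays Bob 1 BTC")
digest = hashlib.sha256(f"Alice pays Bob 1 BTC{nonce}".encode()).hexdigest()
assert digest.startswith("0000")
```

That asymmetry, expensive to produce but cheap for anyone to verify, is what lets a network of strangers converge on a single transaction history with no bank or clearinghouse in the middle.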

While Tim overlooks what makes Bitcoin radical, Tom Sydnor groks it viscerally. Writing in a lengthy comment on my post, Tom expresses dismay at what Bitcoin represents and offers what I would, with apologies, characterize as the cyber-conservative response.