The European Commission has a new report out today on “Implementation of the Safer Social Networking Principles for the EU.” It’s a status report on the implementation of the “Safer Social Networking Principles for the EU,” a “self-regulatory” agreement the EC brokered with 17 social networking sites and other online operators back in 2009. (Co-regulatory would be more accurate here, since the EC is steering, and industry is simply rowing.) The goal was to make the profiles of minors more private and provide other safeguards.

Generally speaking, the EC’s evaluation suggests that great progress has been made, although there’s always room for improvement. For example, the report found that “13 out of the 14 sites tested provide safety information, guidance and/or educational materials specifically targeted at minors;” “Safety information for minors is quite clear and age-appropriate on all sites that provide it, good progress since the first assessment last year;” “Reporting mechanisms are more effective now than in 2010;” most sites have improved Terms of Use that are easy for minors to understand and/or a child-friendly version of the Terms of Use or Code of Conduct; and many “provide safety information for children and parents which is both easy to find and to understand.” Again, there’s always room for improvement, but the general direction is encouraging, especially considering how new many of these sites are.

Unfortunately, Neelie Kroes, Vice President of the European Commission for the Digital Agenda, spun the report in the opposite direction. She issued a statement saying: Continue reading →

I enjoyed this Wall Street Journal essay by Daniel H. Wilson on “The Terrifying Truth About New Technology.”  It touches on many of the themes I’ve discussed here in my essays on techno-panics, fears about information overload, and the broader battle throughout history between technology optimists and pessimists regarding the impact of new technologies on culture, life, and learning. Wilson correctly notes that:

The fear of the never-ending onslaught of gizmos and gadgets is nothing new. The radio, the telephone, Facebook — each of these inventions changed the world. Each of them scared the heck out of an older generation. And each of them was invented by people who were in their 20s.

He continues:

Young people adapt quickly to the most absurd things. Consider the social network Foursquare, in which people not only willingly broadcast their location to the world but earn goofy virtual badges for doing so. My first impulse was to ignore Foursquare—for the rest of my life, if I have to. And that’s the problem. As we get older, the process of adaptation slows way down. Unfortunately, we depend on alternating waves of assimilation and accommodation to adapt to a constantly changing world. For [developmental psychologist Jean] Piaget, this balance between what’s in the mind and what’s in the environment is called equilibrium. It’s pretty obvious when equilibrium breaks down. For example, my grandmother has phone numbers taped to her cellphone. Having grown up with the Rolodex (a collection of numbers stored next to the phone), she doesn’t quite grasp the concept of putting the numbers in the phone.

Why are we so nostalgic about the technology we grew up with? Old people say things like: “This new technology is stupid. I liked (new, digital) technology X better when it was called (old, analog) technology Y. Why, back in my day….” Which leads inexorably to, “I just don’t get it.”

There’s a simple explanation for this phenomenon: “adventure window.” At a certain age, that which is familiar and feels safe becomes more important to you than that which is new, different, and exciting. Think of it as “set-in-your-ways syndrome.”

Continue reading →

It might be tempting to laugh at France’s ban on words like “Facebook” and “Twitter” in the media. France’s Conseil Supérieur de l’Audiovisuel recently ruled that specific references to these sites (in stories not about them) would violate a 1992 law banning “secret” advertising. The council was created in 1989 to ensure fairness in French audiovisual communications, such as in allocation of television time to political candidates, and to protect children from some types of programming.

Sure, laugh at the French. But not for too long. The United States has similarly busybody regulators, who, for example, have primly regulated such advertising themselves. American regulators carefully oversee non-secret advertising, too. Our government nannies equal the French in usurping parents’ decisions about children’s access to media. And the Federal Communications Commission endlessly plays footsie with speech regulation.

In the United States, banning words seems too blatant an affront to our First Amendment, but the United States has a fairly lively “English only” movement. Somehow, regulating an entire communications protocol doesn’t have the same censorious stink.

So it is that our Federal Communications Commission asserts a right to regulate the delivery of Internet service. The protocols on which the Internet runs are communications protocols, remember. Withdraw private control of them and you’ve got a more thoroughgoing and insidious form of speech control: it may look like speech rights remain with the people, but government controls the medium over which the speech travels.

The government has sought to control protocols in the past and will continue to do so in the future. The “crypto wars,” in which government tried to control secure communications protocols, merely presage struggles of the future. Perhaps the next battle will be over BitCoin, an online currency that is resistant to surveillance and confiscation. In BitCoin, communications and value transfer are melded together. To protect us from the scourge of illegal drugs and the recently manufactured crime of “money laundering,” governments will almost certainly seek to bar us from trading with one another and transferring our wealth securely and privately.

So laugh at France. But don’t laugh too hard. Leave the smugness to them.

John Naughton, a professor at the Open University in the U.K. and a columnist for the U.K. Guardian, has a new essay out entitled “Only a Fool or Nicolas Sarkozy Would Go to War with Facebook.” I enjoyed it because it touches upon two interrelated concepts that I’ve spent years writing about: “moral panic” and the “third-person effect hypothesis” (although Naughton doesn’t discuss the latter by name in his piece). To recap, let’s define those terms:

“Moral Panic” / “Techno-Panic”: Christopher Ferguson, a professor at Texas A&M’s Department of Behavioral, Applied Sciences and Criminal Justice, offers the following definition: “A moral panic occurs when a segment of society believes that the behavior or moral choices of others within that society poses a significant risk to the society as a whole.” By extension, a “techno-panic” is simply a moral panic that centers around societal fears about a specific contemporary technology (or technological activity) instead of merely the content flowing over that technology or medium.

“Third-Person Effect Hypothesis”: First formulated by psychologist W. Phillips Davison in 1983, “this hypothesis predicts that people will tend to overestimate the influence that mass communications have on the attitudes and behavior of others. More specifically, individuals who are members of an audience that is exposed to a persuasive communication (whether or not this communication is intended to be persuasive) will expect the communication to have a greater effect on others than on themselves.” While originally formulated as an explanation for how people convinced themselves “media bias” existed where none was present, the third-person-effect hypothesis has provided an explanation for other phenomena and forms of regulation, especially content censorship. Indeed, one of the most intriguing aspects about censorship efforts historically is that it is apparent that many censorship advocates desire regulation to protect others, not themselves, from what they perceive to be persuasive or harmful content. That is, many people imagine themselves immune from the supposedly ill effects of “objectionable” material, or even just persuasive communications or viewpoints they do not agree with, but they claim it will have a corrupting influence on others.

All my past essays about moral panics and third-person effect hypothesis can be found here. These theories are also frequently on display in the work of some of the “Internet pessimists” I have written about here, as well as in many bills and regulatory proposals floated by lawmakers. Which brings us back to the Naughton essay.

Continue reading →

One of my favorite topics lately has been the challenges faced by information control regimes. Jerry Brito and I are writing a big paper on this issue right now. Part of the story we tell is that the sheer scale / volume of modern information flows is becoming so overwhelming that it raises practical questions about just how effective any info control regime can be. [See our recent essays on the topic: 1, 2, 3, 4, 5.] As we continue our research, we’ve been attempting to unearth some good metrics / factoids to help tell this story. It’s challenging because there aren’t many consistent data sets depicting online data growth over time and some of the best anecdotes from key digital companies are only released sporadically. Anyway, I’d love to hear from others about good metrics and data sets that we should be examining. In the meantime, here are a few fun facts I’ve unearthed in my research so far. Please let me know if more recent data is available. [Note: Last updated 7/18/11]

  • Facebook: users submit around 650,000 comments on the 100 million pieces of content served up every minute on its site.[1]  People on Facebook install 20 million applications every day.[2]
  • YouTube: every minute, 48 hours of video were uploaded.  According to Peter Kafka of The Wall Street Journal, “That’s up 37 percent in the last six months, and 100 percent in the last year. YouTube says the increase comes in part because it’s easier than ever to upload stuff, and in part because YouTube has started embracing lengthy live streaming sessions. YouTube users are now watching more than 3 billion videos a day. That’s up 50 percent from the last year, which is also a huge leap, though the growth rate has declined a bit: Last year, views doubled from a billion a day to two billion in six months.”[3]
  • eBay is now the world’s largest online marketplace with more than 90 million active users globally and $60 billion in transactions annually, or $2,000 every second.[4]
  • Google: 34,000 searches per second (2 million per minute; 121 million per hour; 3 billion per day; 88 billion per month).[5]
  • Twitter already has 300 million users producing 140 million Tweets a day, which adds up to a billion Tweets every 8 days[6] (@ 1,600 Tweets per second). “On the first day Twitter was made available to the public, 224 tweets were sent. Today, that number of updates are posted at least 10 times a second.”[7]
  • Apple: more than 10 billion apps have been downloaded from its App Store by customers in over 77 countries.[8] According to Chris Burns of SlashGear, “Currently it appears that another thousand apps are downloaded every 9 seconds in the Android Marketplace while every 3 seconds another 1,000 apps are downloaded in the App Store.”
  • Yelp: as of July 2011 the site hosted over 18 million user reviews.[9]
  • Wikipedia: Every six weeks, there are 10 million edits made to Wikipedia.[10]
  • “Humankind shared 65 exabytes of information in 2007, the equivalent of every person in the world sending out the contents of six newspapers every day.”[11]
  • Researchers at the San Diego Supercomputer Center at the University of California, San Diego, estimate that, in 2008, the world’s 27 million business servers processed 9.57 zettabytes, or 9,570,000,000,000,000,000,000 bytes of information.  This is “the digital equivalent of a 5.6-billion-mile-high stack of books from Earth to Neptune and back to Earth, repeated about 20 times a year.” The study also estimated that enterprise server workloads are doubling about every two years, “which means that by 2024 the world’s enterprise servers will annually process the digital equivalent of a stack of books extending more than 4.37 light-years to Alpha Centauri, our closest neighboring star system in the Milky Way Galaxy.”[12]
  • According to Dave Evans, Cisco’s chief futurist and chief technologist for the Cisco Internet Business Solutions Group, about 5 exabytes of unique information were created in 2008. That’s 1 billion DVDs. Fast forward three years and we are creating 1.2 zettabytes, with one zettabyte equal to 1,024 exabytes. “This is the same as every person on Earth tweeting for 100 years, or 125 million years of your favorite one-hour TV show,” says Evans. Our love of high-definition video accounts for much of the increase. By Cisco’s count, 91% of Internet data in 2015 will be video.[13]
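Several of the per-minute, per-hour, and per-day figures in the bullets above are simply unit conversions of a single base rate, so they are easy to sanity-check. Here is a quick Python sketch doing exactly that for the Google and Twitter numbers (the base rates come from the bullets; everything else is plain arithmetic, with the monthly figure assuming a 30-day month):

```python
# Sanity-check the rate conversions cited in the bullets above.
SECONDS_PER_MINUTE = 60
SECONDS_PER_DAY = 60 * 60 * 24  # 86,400

# Google: 34,000 searches per second, scaled up.
google_per_sec = 34_000
print(google_per_sec * SECONDS_PER_MINUTE)    # 2,040,000 -- "2 million per minute"
print(google_per_sec * SECONDS_PER_DAY)       # ~2.94 billion -- "3 billion per day"
print(google_per_sec * SECONDS_PER_DAY * 30)  # ~88.1 billion -- "88 billion per month"

# Twitter: 140 million Tweets per day, scaled down and up.
tweets_per_day = 140_000_000
print(round(tweets_per_day / SECONDS_PER_DAY))  # 1620 -- "1,600 Tweets per second"
print(tweets_per_day * 8)                       # 1.12 billion -- "a billion Tweets every 8 days"
```

The published figures are rounded versions of these products, which is reassuring: the various numbers floating around in press accounts at least agree with one another.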


[1]     Ken Deeter, “Live Commenting: Behind the Scenes,” Facebook.com, February 7, 2011, http://www.facebook.com/note.php?note_id=496077348919.
[4]     eBay, “Who We Are,” http://www.ebayinc.com/who.
[5]     Matt McGee, “By The Numbers: Twitter Vs. Facebook Vs. Google Buzz,” SearchEngineLand, February 23, 2010, http://searchengineland.com/by-the-numbers-twitter-vs-facebook-vs-google-buzz-36709.
[7]     Nicholas Jackson, “Infographic: A Look at Twitter’s Explosive Five-Year History,” The Atlantic, July 18, 2011, http://www.theatlantic.com/technology/archive/2011/07/infographic-a-look-at-twitters-explosive-five-year-history/242070.
[9]     “10 Things You Should Know about Yelp,” Yelp.com, http://www.yelp.com/about [accessed July 18, 2011].
[10]   “Wikipedia: Edit Growth Measured in Time between Every 10,000,000th Edit,” http://en.wikipedia.org/wiki/User:Katalaveno/TBE.
[11]   Martin Hilbert and Priscila Lopez, “The World’s Technological Capacity to Store, Communicate, and Compute Information,” Science, February 10, 2011, http://annenberg.usc.edu/News%20and%20Events/News/110210Hilbert.aspx.
[12]   Rex Graham, “Business Information Consumption: 9,570,000,000,000,000,000,000 Bytes per Year,” UC San Diego News Center, April 6, 2011, http://ucsdnews.ucsd.edu/newsrel/general/04-05BusinessInformation.asp.
[13]   Julie Bort, “10 Technologies That Will Change the World in the Next 10 Years,” Network World, July 15, 2011, http://m.networkworld.com/news/2011/071511-cisco-futurist.html?page=1.

Sometimes free-marketeers are branded “free market fundamentalists” or something similar by their ideological opponents. The implication is that our preference for a society in which free people interact voluntarily to organize society’s resources is an irrational desire or a religion. I’m sure there’s a similar epithet we give to nanny staters—oh, there’s one, “nanny staters”—who we believe to have excessive faith in government solutions.

Market processes have decent theoretical explanations, such as Friedrich Hayek’s essay, “The Use of Knowledge in Society.” It’s not the easiest read, but lovers of the Internet, who see the genius of its decentralization, should see similar genius in markets as a method for discovering society’s wants and uniting to achieve them—without coercion.

From time to time, we also point out examples of how market processes work to deliver even intangible goods like privacy. So, for example, I noted market pressure against Facebook’s privacy-invasive “beacon” advertising system in 2007. Berin pointed out in 2008 that market forces caused Google to remove an oppressive clause from the Chrome end user license agreement. Google competitor Cuil made a run at the search behemoth based on privacy that year, something I noted briefly then (and Ryan and I discussed in the comments). I’ve also noted the failure of many to find true market failures.

As Cuil illustrates, not every privacy play works, but companies routinely pitch the public on the privacy merits of their products and the demerits of others’. It’s not a highly visible process, but it sometimes gets a little more visible when it fails. So thank you, Facebook, for a big #FAIL in the privacy competition area this week. You provide us a nice lesson in one of the ways markets work to meet consumer privacy demands.

You see, Facebook hired PR firm Burson-Marsteller to do a whisper campaign on the privacy demerits of a Google product called Social Circle. By pushing the story of privacy problems with a Google effort in the social networking space, Facebook hoped to thwart a competitor that it fears. Success would also be a success for privacy protection. If Google were doing something wrong, and Facebook were to make the case to the public, Google would lose face and it would lose business. Most importantly, a privacy-invasive product—as determined by public consensus—would recede. Markets often work by silently shunning products that don’t cut it. (Again, hard to see if you’re not looking for it, or if you’re committed to disbelieving it.)

Facebook appears not to have succeeded. Prickly privacy advocate Chris Soghoian outed the Burson-Marsteller campaign. Dan Lyons of the Daily Beast cornered Facebook into confessing its role in the attack on Google. And privacy commentator Kashmir Hill gives the privacy issues with Social Circle a “meh.”

When it happens differently, you get a change in a service like Social Circle—the way Facebook changed “beacon” and Google changed the Chrome EULA. These are anecdotes, and they reflect but one element of the market processes that shape products and services. But it’s something that “market denialists” should consider as they dig deep to explain to themselves and others how various mechanisms in our society work.

User-driven websites — also known as online intermediaries — frequently come under fire for disabling user content due to bogus or illegitimate takedown notices. Facebook is at the center of the latest controversy involving a bogus takedown notice. On Thursday morning, the social networking site disabled Ars Technica’s page after receiving a DMCA takedown notice alleging the page contained copyright infringing material. While details about the claim remain unclear, given that Facebook restored Ars’s page yesterday evening, it’s a safe bet that the takedown notice was without merit.

Understandably, Ars Technica wasn’t exactly pleased that its Facebook page — one of its top sources of incoming traffic — was shut down for seemingly no good reason. Ars was particularly disappointed by how Facebook handled the situation. In an article posted yesterday (and updated throughout the day), Ars co-founder Ken Fisher and senior editor Jacqui Cheng chronicled their struggle in getting Facebook to simply discuss the situation with them and allow Ars to respond to the takedown notice.

Facebook took hours to respond to Ars’s initial inquiry, and didn’t provide a copy of the takedown notice until the following day. Several other major tech websites, including ReadWriteWeb and TheNextWeb, also covered the issue, noting that Ars Technica is the latest in a series of websites to have had their Facebook pages wrongly disabled. In a follow-up article posted today, Ars elaborated on what happened and offered some tips to Facebook on how it could have better handled the situation.

It’s totally fair to criticize how Facebook deals with content takedown requests. Ars is right that the company could certainly do a much better job of handling the process, and Facebook will hopefully re-evaluate its procedures in light of this widely publicized snafu. In calling out Facebook’s flawed approach to dealing with takedown requests, however, Ars Technica doesn’t do justice to the larger, more fundamental problem of bogus takedown notices.

Continue reading →

Jack Shafer brought to my attention this terrific new Politico column by Michael Kinsley entitled, “How Microsoft Learned ABCs of D.C.” In the editorial, Kinsley touches on some of the same themes I addressed in my recent piece here, “On Facebook ‘Normalizing Relations’ with Washington,” as well as in my Cato Institute essay from last year on “The Sad State of Cyber-Politics.” Kinsley notes how Microsoft was originally bashed by many for not getting into the D.C. lobbying game early enough:

there even was a feeling that, in refusing to play the Washington game, Microsoft was being downright unpatriotic. Look, buddy, there is an American way of doing things, and that American way includes hiring lobbyists, paying lawyers vast sums by the hour, throwing lavish parties for politicians, aides, journalists and so on. So get with the program.
But after doing exactly that, Kinsley notes, the company got blasted for being too aggressive in D.C.!
So that’s what Microsoft did. It moved its “government affairs” office out of distant Chevy Chase and into the downtown K Street corridor. It bulked up on lawyers and hired the best-connected lobbyists. Soon, Microsoft was coming under criticism for being heavy-handed in its attempts to buy influence.
“But the sad thing is that it seems to have worked. Microsoft is no longer Public Enemy No. 1,” Kinsley notes, and he continues on to reiterate a point I made in my last two essays: Google is the Great Satan now! Continue reading →

The New York Times reports that, “Facebook is hoping to do something better and faster than any other technology start-up-turned-Internet superpower. Befriend Washington. Facebook has layered its executive, legal, policy and communications ranks with high-powered politicos from both parties, beefing up its firepower for future battles in Washington and beyond.”  The article goes on to cite a variety of recent hires by Facebook, its new DC office, and its increased political giving.

This isn’t at all surprising and, in one sense, it’s almost impossible to argue with the logic of Facebook deciding to beef up its lobbying presence inside the Beltway. In fact, later in the Times story we hear the same two traditional arguments trotted out for why Facebook must do so: (1) Because everyone’s doing it! and (2) You don’t want to be Microsoft, do you? But I’m not so sure whether “normalizing relations” with Washington is such a good idea for Facebook or other major tech companies, and I’m certainly not persuaded by the logic of those two common refrains regarding why every tech company must rush to Washington.

Continue reading →

Last night, Declan McCullagh of CNet posted two tweets related to the concerns already percolating in the privacy community about a new Apple and Android app called “Color,” which allows those who use it to take photos and videos and instantaneously share them with other people within a 150-ft radius to create group photo/video albums. In other words, this new app marries photography, social networking, and geo-location. And because the app’s default setting is to share every photo and video you snap openly with the world, Declan wonders “How long will it take for the #privacy fundamentalists to object to Color.com’s iOS/Android apps?” After all, he says facetiously, “Remember: market choices can’t be trusted!”  He then reminds us that there’s really nothing new under the privacy policy sun and that we’ve seen this debate unfold before, such as when Google released its GMail service to the world back in 2004.

Indeed, for me, this debate has a “Groundhog Day” sort of feel to it.  I feel like I’ve been fighting the same fight with many privacy fundamentalists for the past decade. The cycle goes something like this: Continue reading →