Miscellaneous

My colleague Cord Blomquist brought to my attention this amazing infographic, which depicts the stunning volume of activity unfolding every 60 seconds online. [Click on the image to enlarge it.] It appears the graphic was created by Go-Globe.com, a web design firm, although I’ve not been able to find the original. [Update: The folks at Go-Globe got in touch with me and sent me the original link. Thanks, Go-Globe!] Cord found it in this collection of “35 Cool Infographics for Web and Graphic Designers.” I find this graphic especially interesting because it helps bolster the work I’ve been doing lately with Jerry Brito on the challenges faced by information control regimes. [See our recent essays on the topic: 1, 2, 3, 4, 5.] Most recently, I put together a list of “Some Metrics Regarding the Volume of Online Activity,” but I’d been searching for a really excellent visualization to help tell this story, and this is probably the best one I’ve ever seen. Absolutely amazing numbers.

The U.S. government doesn’t need to pick winners and losers and the last thing we should think about doing is messing up the Internet with inappropriate regulation.

Amen, sister! The above quote comes from Victoria Espinel, the U.S. Intellectual Property Enforcement Coordinator at the Office of Management and Budget (AKA the Copyright Czar), speaking at the World Copyright Summit in Brussels about how corporate innovation is often more effective than laws. She went on to explain that the cloud-based music services now offered by Apple, Amazon, and Google “may have the effect of reducing piracy by giving value to consumers …” Espinel is an Obama appointee, which calls into question the concerns voiced a year ago that the RIAA is taking over the Department of Justice.

The next stop on her speaking tour should be the Federal Communications Commission.

My latest weekly Forbes column is entitled “The Internet Isn’t Killing Our Culture or Democracy,” and it’s a short review of the new book, The Filter Bubble: What the Internet Is Hiding from You, by MoveOn.org board president Eli Pariser. As I note in my essay, Pariser’s book covers some very familiar ground already plowed by others in the burgeoning Internet pessimism movement:

[The Filter Bubble] restates a thesis developed a decade ago in both Cass Sunstein’s Republic.com and Andrew L. Shapiro’s The Control Revolution, that increased personalization is breeding a dangerous new creature—Anti-Democratic Man. “Democracy requires citizens to see things from one another’s point of view,” Pariser notes, “but instead we’re more and more enclosed in our own bubbles.”  Pariser worries that personalized digital “filters” like Facebook, Google, Twitter, Pandora, and Netflix are narrowing our horizons about news and culture and leaving “less room for the chance encounters that bring insights and learning.” “Technology designed to give us more control over our lives is actually taking control away,” he fears.

Pariser joins a growing brigade of Internet pessimists. Almost every year for the past decade a new book has been published warning that the Internet is making us stupid, debasing our culture, or destroying social interaction. Many of these Net pessimists—whose ranks include Andrew Keen (The Cult of the Amateur), Lee Siegel (Against the Machine), Jaron Lanier (You Are Not a Gadget) and Nicholas Carr (The Shallows)—lament the rise of “The Daily Me,” the hyper-personalization of news, culture, and information. They claim increased information and media customization will lead to closed-mindedness, corporate brainwashing, an online echo-chamber, or even the death of deliberative democracy.

If you’ve read anything I’ve written on this topic in recent years, you will not be surprised to hear that I disagree with Pariser and these other Net pessimists when it comes to fears about hyper-personalization and user customization, as I noted in my recent book chapter, “The Case for Internet Optimism, Part 1 – Saving the Net From Its Detractors.”

One of my favorite topics lately has been the challenges faced by information control regimes. Jerry Brito and I are writing a big paper on this issue right now. Part of the story we tell is that the sheer scale and volume of modern information flows is becoming so overwhelming that it raises practical questions about just how effective any info control regime can be. [See our recent essays on the topic: 1, 2, 3, 4, 5.] As we continue our research, we’ve been attempting to unearth some good metrics and factoids to help tell this story. It’s challenging because there aren’t many consistent data sets depicting online data growth over time, and some of the best anecdotes from key digital companies are only released sporadically. Anyway, I’d love to hear from others about good metrics and data sets that we should be examining. In the meantime, here are a few fun facts I’ve unearthed in my research so far; a quick sanity check of the arithmetic follows the list. Please let me know if more recent data is available. [Note: Last updated 7/18/11]

  • Facebook: users submit around 650,000 comments on the 100 million pieces of content served up every minute on its site.[1]  People on Facebook install 20 million applications every day.[2]
  • YouTube: every minute, 48 hours of video are uploaded. According to Peter Kafka of The Wall Street Journal, “That’s up 37 percent in the last six months, and 100 percent in the last year. YouTube says the increase comes in part because it’s easier than ever to upload stuff, and in part because YouTube has started embracing lengthy live streaming sessions. YouTube users are now watching more than 3 billion videos a day. That’s up 50 percent from the last year, which is also a huge leap, though the growth rate has declined a bit: Last year, views doubled from a billion a day to two billion in six months.”[3]
  • eBay is now the world’s largest online marketplace with more than 90 million active users globally and $60 billion in transactions annually, or $2,000 every second.[4]
  • Google: 34,000 searches per second (2 million per minute; 121 million per hour; 3 billion per day; 88 billion per month).[5]
  • Twitter already has 300 million users producing 140 million Tweets a day, which adds up to a billion Tweets every 8 days[6] (about 1,600 Tweets per second). “On the first day Twitter was made available to the public, 224 tweets were sent. Today, that number of updates are posted at least 10 times a second.”[7]
  • Apple: more than 10 billion apps have been downloaded from its App Store by customers in over 77 countries.[8] According to Chris Burns of SlashGear, “Currently it appears that another thousand apps are downloaded every 9 seconds in the Android Marketplace while every 3 seconds another 1,000 apps are downloaded in the App Store.”
  • Yelp: as of July 2011 the site hosted over 18 million user reviews.[9]
  • Wikipedia: Every six weeks, there are 10 million edits made to Wikipedia.[10]
  • “Humankind shared 65 exabytes of information in 2007, the equivalent of every person in the world sending out the contents of six newspapers every day.”[11]
  • Researchers at the San Diego Supercomputer Center at the University of California, San Diego, estimate that, in 2008, the world’s 27 million business servers processed 9.57 zettabytes, or 9,570,000,000,000,000,000,000 bytes of information.  This is “the digital equivalent of a 5.6-billion-mile-high stack of books from Earth to Neptune and back to Earth, repeated about 20 times a year.” The study also estimated that enterprise server workloads are doubling about every two years, “which means that by 2024 the world’s enterprise servers will annually process the digital equivalent of a stack of books extending more than 4.37 light-years to Alpha Centauri, our closest neighboring star system in the Milky Way Galaxy.”[12]
  • According to Dave Evans, Cisco’s chief futurist and chief technologist for the Cisco Internet Business Solutions Group, about 5 exabytes of unique information were created in 2008. That’s 1 billion DVDs. Fast forward three years and we are creating 1.2 zettabytes, with one zettabyte equal to 1,024 exabytes. “This is the same as every person on Earth tweeting for 100 years, or 125 million years of your favorite one-hour TV show,” says Evans. Our love of high-definition video accounts for much of the increase. By Cisco’s count, 91% of Internet data in 2015 will be video.[13]
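
Since most of these per-second and per-day figures are unit conversions of one another, here is a quick back-of-the-envelope check. This is a minimal Python sketch whose constants are simply the numbers quoted in the list above, not independent data:

```python
# Back-of-the-envelope checks on the figures quoted in the list above.
# Every constant comes straight from the list items; nothing here is new data.

SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

# Google: 34,000 searches per second, scaled up.
searches_per_sec = 34_000
print(f"Google searches/day:   {searches_per_sec * SECONDS_PER_DAY / 1e9:.1f} billion")      # ~2.9, i.e. "3 billion per day"
print(f"Google searches/month: {searches_per_sec * SECONDS_PER_DAY * 30 / 1e9:.0f} billion")  # ~88 billion

# Twitter: 140 million Tweets per day.
tweets_per_day = 140e6
print(f"Tweets/second:         {tweets_per_day / SECONDS_PER_DAY:,.0f}")  # ~1,620, i.e. "about 1,600"
print(f"Days to 1B Tweets:     {1e9 / tweets_per_day:.1f}")               # ~7.1, consistent with "every 8 days"

# eBay: $60 billion in annual transactions.
print(f"eBay $/second:         {60e9 / (365 * SECONDS_PER_DAY):,.0f}")    # ~$1,900, i.e. "$2,000 every second"

# Cisco: 5 exabytes expressed as DVDs, assuming a ~5 GB disc.
print(f"DVDs in 5 exabytes:    {5e18 / 5e9:,.0f}")                        # 1 billion DVDs
```

The numbers hang together: none of the conversions is off by more than ordinary rounding, which is some comfort given how sporadically the underlying figures are released.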


[1] Ken Deeter, “Live Commenting: Behind the Scenes,” Facebook.com, February 7, 2011, http://www.facebook.com/note.php?note_id=496077348919.

[4] eBay, “Who We Are,” http://www.ebayinc.com/who.

[5] Matt McGee, “By The Numbers: Twitter Vs. Facebook Vs. Google Buzz,” SearchEngineLand, February 23, 2010, http://searchengineland.com/by-the-numbers-twitter-vs-facebook-vs-google-buzz-36709.

[7] Nicholas Jackson, “Infographic: A Look at Twitter’s Explosive Five-Year History,” The Atlantic, July 18, 2011, http://www.theatlantic.com/technology/archive/2011/07/infographic-a-look-at-twitters-explosive-five-year-history/242070.

[9] “10 Things You Should Know about Yelp,” Yelp.com, http://www.yelp.com/about [accessed July 18, 2011].

[10] “Wikipedia: Edit Growth Measured in Time between Every 10,000,000th Edit,” http://en.wikipedia.org/wiki/User:Katalaveno/TBE.

[11] Martin Hilbert and Priscila Lopez, “The World’s Technological Capacity to Store, Communicate, and Compute Information,” Science, February 10, 2011, http://annenberg.usc.edu/News%20and%20Events/News/110210Hilbert.aspx.

[12] Rex Graham, “Business Information Consumption: 9,570,000,000,000,000,000,000 Bytes per Year,” UC San Diego News Center, April 6, 2011, http://ucsdnews.ucsd.edu/newsrel/general/04-05BusinessInformation.asp.

[13] Julie Bort, “10 Technologies That Will Change the World in the Next 10 Years,” Network World, July 15, 2011, http://m.networkworld.com/news/2011/071511-cisco-futurist.html?page=1.

As a rule of thumb, when I have to spend a given amount of time straightening out a company’s poor service or unscrupulous practices, I’ll spend an equivalent amount of time giving that company some payback. Today’s victim: T-Mobile. Fear the blog post.

A letter from Asurion Warranty Services arrived in my mail today thanking me for signing up for their “Premium Handset Protection Bundle” for T-Mobile phones.

Oh no I didn’t. It costs $5.99 a month for repair and replacement of my newly upgraded phone. That’s pretty much the price of a phone per year for such protections. Bad deal. I haven’t lost or damaged a phone in a decade, and I didn’t agree to have this charge added to my phone bill.

I am on hold right now, trying to learn just how this got onto my bill. Friendly, helpful T-Mobile customer service people have told me that I should go down to the T-Mobile store where I upgraded in order to straighten this out. No I shouldn’t. T-Mobile should be straightening this out right now over the phone, with an apology and a thank you.

I am done with my 40-minute phone call, in which friendly customer service supervisor Kassidy K. (#1204178) tried to assign me the task of calling the store where I upgraded my phone on Monday to get this straightened out. I explained to Kassidy K. that I’ve made the only call I need to—that’s the call we were on. Her next work-day is Wednesday, and I told her I expected to hear from her about this being cleared up.

If I have to make another call, it’s just as likely to be about returning my phone and canceling my service as getting this charge removed from my bill.

You people can argue all you want about top-down—whether the government should allow the AT&T-T-Mobile merger. I’ll do bottom-up—whether T-Mobile should get my business.

“Global Internet Governance: Research and Public Policy Challenges for the Next Decade” is the title of a conference being held May 5 and 6 at the American University School of International Service in Washington. See the full program here.

Featured will be a keynote by the NTIA head, Assistant Secretary of Commerce Lawrence Strickling. TLF-ers may be especially interested in the panel on the market for IP version 4 addresses that is emerging as the Regional Internet Registries and ICANN have depleted their free pool of IP addresses. The panel “Scarcity in IPv4 addresses” will feature representatives of the American Registry for Internet Numbers (ARIN) and Addrex/Depository, Inc., the new company that brokered the deal between Nortel and Microsoft. There will also be debates about Wikileaks and the future of the Internet Governance Forum. Academic research papers on ICANN’s Affirmation of Commitments, the role of national governments in ICANN, the role of social media in the Middle East/North Africa revolutions, and other topics will be presented on the second day. The event was put together by the Global Internet Governance Academic Network (GigaNet). Attendance is free of charge, but you are asked to register in advance.

I was surprised to read a defense of the AT&T-T-Mobile merger here.

Let’s begin at the beginning and ask why this merger is happening. It’s not as if AT&T is gaining dominance the way Google gained it in search and advertising, or the way Intel did in chips: i.e., through low prices, superior products, and customer loyalty. No, last time I looked, AT&T was the carrier with the lowest customer satisfaction ratings, some of the highest prices, and one of the weakest network performance metrics. In my opinion there is no reason for this merger to take place other than to make life easier for AT&T by reducing competitive pressures on it. AT&T seems to be driven by the following calculus: it can either grow its services and its network under the harsh constraints of market pricing and competition, or it can attempt to reduce the field to an oligopoly with tacit price controls by using its size and financial bulk to eliminate a pest that keeps downward pressure on pricing and service requirements. I think it is rational for AT&T to try to get away with the latter. I think it’s insane for free-market-oriented thinkers to support it.

Larry Downes can’t argue with the extremely high level of market concentration and the scary HHI measurements that the merger would produce. So he plays the game that clever antitrust advocates always play: shift the market definition. Downes argues that “both Justice and the FCC have consistently concluded that wireless markets are essentially local.” I see no citation to any specific document in Downes’s claim, but if Justice and the FCC have concluded that “local” means “my metropolitan area,” they are wrong.
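
For readers who don’t work with it daily: the Herfindahl-Hirschman Index is simply the sum of the squared market shares of every firm in the market, so a single large firm moves it far more than several small ones. Here is a minimal sketch using hypothetical national share figures. These are my own round numbers for illustration, not data from the merger filings:

```python
def hhi(shares_percent):
    """Herfindahl-Hirschman Index: the sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares_percent)

# Hypothetical national market shares, in percent -- illustrative round numbers,
# NOT the actual figures from the merger filings.
pre_merger  = {"AT&T": 32, "Verizon": 31, "Sprint": 17, "T-Mobile": 11, "Others": 9}
post_merger = {"AT&T + T-Mobile": 43, "Verizon": 31, "Sprint": 17, "Others": 9}

print(f"Pre-merger HHI:  {hhi(pre_merger.values()):,}")   # 2,476 -- already near the 2,500 line
print(f"Post-merger HHI: {hhi(post_merger.values()):,}")  # 3,180 -- a jump of roughly 700 points
```

Under the Justice Department’s 2010 Horizontal Merger Guidelines, a market above 2,500 is “highly concentrated,” and an increase of more than 200 points in such a market is presumed likely to enhance market power. That is precisely why shifting the market definition from national to local is the whole ballgame.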

Let’s reacquaint everyone with a very basic but pertinent fact: 93% of the wireless users in the U.S. are served by the national carriers. This number (the proportion served by national as opposed to regional providers) has generally increased over the past decade, driven by demand-side requirements, mergers, and supply-side efficiencies. The choices of consumers have rendered a decisive verdict negating Downes’s claim. Whether it’s voice or data, people expect and want seamless national service; a small but significant segment wants transnational compatibility as well.

Increases in the scope of service will intensify as we move from a primarily voice-driven market to a data-driven market. Carriers who have to impose roaming charges and interconnection fees on their users will not be competitive. Nor will they be able to attract the interest of the cutting-edge handset manufacturers and service developers. Can you imagine Apple signing an iPhone exclusivity deal with Cricket?

It is no accident that the dominant mobile network operators have national brands and national footprints. Most Americans travel outside their metro areas at least once a month, and go places further away than that at least once or twice a year. The 93% who choose a national carrier are rationally calculating that it pays to not have to guess the service area limits of their provider. Of course, a highly budget-constrained segment of the market will accept limited local service for a lower price. To say that those smaller providers are in the same market as a T-Mobile or AT&T is not plausible. They occupy a niche. And if one allows a major merger like this on the grounds that these tiny players constitute a competitive alternative to the likes of AT&T, what are you going to say as the last of these local providers is gobbled up?

How about that “spectrum efficiency” argument? Downes, like the AT&T Corp., makes the same claim that the old AT&T made when it said there should be no microwave-based competition in long distance. As a matter of pure engineering efficiency, it is of course true that a single, optimizing planner can make better use of limited spectrum bands than multiple, competitive providers. But then, that argument applies to any and all carriers (an AT&T-Verizon merger, for example) and to any resource – that’s why it was used by the socialists of the 19th century to claim that capitalism was inherently wasteful and inefficient. Dynamic efficiencies of competition typically benefit the public more than a few allocative efficiencies. And there are plenty of ways for AT&T to expand network capacity without merging.

But there is an interesting twist to this line of reasoning. Notice how the “market is local” claim suddenly disappears. AT&T needs to take over a smaller national rival, according to Downes, so it can “accelerate deployment of nationwide mobile broadband using LTE technology, including expansion into rural areas.” Voila! Once we start talking about spectrum efficiencies and the promotion of universal service we take a nationwide perspective, not a local one. Doesn’t this obvious contradiction make anyone suspicious?

Notice also the ominous historical overtones of AT&T’s claim that it will be able to promote universal broadband service in rural areas if it has a stronger monopoly… er, if it gains consolidation efficiencies. Hey, rural areas don’t have congested spectrum, do they? What’s stopping AT&T from serving them now? If it needs help to do it, where are the subsidies going to come from? Would more market power make that possible? One cannot help but ask: Is AT&T doing this to get more spectrum, or is it trying to pull a neo-Theodore Vail and promise the government that it will subsidize rural access if it has more market power?

Bottom line: this is one step too far back to the days of a single telephone company. If you support a competitive industry where one can reasonably expect the public and legislators to rely on market forces as the primary industry regulator, this merger has to be stopped. On the other hand, if you welcome the growing pressures for regulating carriers and making them the policemen and chokepoints for network control, a bigger AT&T is just what the doctor ordered.

In the latest example of big government run amok, several politicians think they ought to be in charge of which applications you should be able to install on your smartphone.

On March 22, four U.S. Senators sent a letter to Apple, Google, and Research in Motion urging the companies to disable access to mobile device applications that enable users to locate DUI checkpoints in real time. Unsurprisingly, in their zeal to score political points, the Senators—Harry Reid, Chuck Schumer, Frank Lautenberg, and Tom Udall—got it dead wrong.

Had the Senators done some basic fact-checking before firing off their missive, they would have realized that the apps they targeted actually enhance the effectiveness of DUI checkpoints while reducing their intrusiveness. And had the Senators glanced at the Constitution – you know, that document they swore an oath to support and defend – they would have seen that sobriety checkpoint apps are almost certainly protected by the First Amendment.

While Apple has stayed mum on the issue so far, Research in Motion quickly yanked the apps in question. This is understandable; perhaps RIM doesn’t wish to incur the wrath of powerful politicians who are notorious for making a public spectacle of going after companies that have the temerity to stand up for what is right.

Google has refused to pull the DUI checkpoint finder apps from the Android app store, reports Digital Trends. Google’s steadfastness on this matter reflects well on its stated commitment to free expression and openness. Not that Google’s track record is perfect on this front – it’s made mistakes from time to time – but it’s certainly a cut above several of its competitors when it comes to defending Internet freedom.

Kinda cool.

Yet another hearing on privacy issues has been slated for this coming Wednesday, March 16th. This latest one is in the Senate Commerce Committee and it is entitled “The State of Online Consumer Privacy.” As I’m often asked by various House and Senate committee staffers to help think of good questions for witnesses, I’m listing a few here that I would love to hear answered by any Federal Trade Commission (FTC) or Dept. of Commerce (DoC) officials testifying. You will recall that both agencies released new privacy “frameworks” late last year and seem determined to move America toward a more “European-ized” conception of privacy regulation. [See our recent posts critiquing the reports here.] Here are a few questions that should be put to the FTC and DoC officials, or those who support the direction they are taking us. Please feel free to suggest others:

  • Before implying that we are experiencing market failure, why hasn’t either the FTC or DoC conducted a thorough review of online privacy policies to evaluate how well organizational actions match up with promises made in those policies?
  • To the extent any cost-benefit analysis was done internally before the release of these reports, has an effort been made to quantify the potential size of the hidden “privacy tax” that new regulations like “Do Not Track” could impose on the market?
  • Has the impact of new regulations on small competitors or new entrants in the field been considered? Has any attempt been made to quantify how much less entry and innovation would occur as a result of such regulation?
  • Were any economists from the FTC’s Economics Bureau consulted before the new framework was released? Did the DoC consult any economists?
  • Why do FTC and DoC officials believe that citing unscientific public opinion polls from regulatory advocacy organizations serves as a surrogate for serious cost-benefit analysis or an investigation into how well privacy policies actually work in the marketplace?
  • If they refuse to conduct more comprehensive internal research, have the agencies considered contracting with external economists to build a body of research looking into these issues (as the Federal Communications Commission did a decade ago in its media ownership proceeding)?
  • Has either agency attempted to determine consumers’ “willingness to pay” for increased privacy regulation?
  • More generally, where is the “harm” and aren’t there plenty of voluntary privacy-enhancing tools out there that privacy-sensitive users can tap to shield their digital footsteps, if they feel so inclined?