The National Transportation Safety Board recommended yesterday that states ban all non-emergency use of portable electronic devices while driving, except for devices that support the driving task (such as GPS). The recommendation followed the NTSB’s investigation of a tragic accident in Missouri triggered by a driver who was texting.
Personally, I don’t see how someone can pay attention to the road while texting. (I’m having a hard enough time paying attention to a conference presentation while I’m typing this!) But the National Transportation Safety Board’s recommendation is a classic example of regulatory overreach based on anecdote. The NTSB wants to use one tired driver’s indefensible and extreme texting (which led to horrific results) as an excuse to ban all use of portable electronic devices while driving – including hands-free phone conversations. Before states act on this recommendation, they should carefully examine systematic evidence – not just anecdotes – to determine whether different uses of portable electronic devices pose different risks. They should also consider whether bans on some uses would expose drivers to risks greater than the risk the ban would prevent.
The Senate might vote this week on Sen. Hutchison’s resolution of disapproval for the FCC’s net neutrality rules. If ever there was a regulation that showed why independent regulatory agencies ought to be required to conduct solid regulatory analysis before writing a regulation, net neutrality is it.
For more than three decades, executive orders have required executive branch agencies to prepare a Regulatory Impact Analysis accompanying major regulations. One of the first things the agency is supposed to do is identify the market failure, government failure, or other systemic problem the regulation is supposed to solve. The agency ought to demonstrate that the problem actually exists in order to show that the regulation is necessary.
But the net neutrality rules have virtually no analysis of a systemic problem that actually exists, and no data demonstrating that the problem is real. Instead, the FCC’s order outlines the incentives Internet providers might face to treat some traffic differently from other traffic, in a discussion heavily freighted with “could’s” and “may’s”. Then it offers up just four familiar anecdotes that have been used repeatedly to support the claim that non-neutrality is a significant threat (all four fit in paragraph 35 of the order). The FCC asserts without support that Internet providers have incentives to do these things even if they lack market power, and indeed in a footnote it dispenses with the need to consider market power: “Because broadband providers have the ability to act as gatekeepers even in the absence of market power with respect to end users, we need not conduct a market power analysis.” (footnote 87)
Thus far, no administration of either party has sought to apply Regulatory Impact Analysis requirements to independent agencies. If administrations won’t, Congress should.
[I am participating in an online “debate” at the American Constitution Society with Professor Ben Edelman. The debate consists of an opening statement and concluding responses. Professor Edelman’s opening statement is here. I have also cross-posted the opening statement at Truthonthemarket and Tech Liberation Front. This is my closing statement, which is also cross-posted at Truthonthemarket.]
Professor Edelman’s opening post does little to support his case. Instead, it reflects the same retrograde antitrust I criticized in my first post.
Edelman’s understanding of antitrust law and economics appears firmly rooted in the 1960s approach to antitrust in which enforcement agencies, courts, and economists vigorously attacked novel business arrangements without regard to their impact on consumers. Judge Learned Hand’s infamous passage in the Alcoa decision comes to mind as an exemplar of antitrust’s bad old days when the antitrust laws demanded that successful firms forego opportunities to satisfy consumer demand. Hand wrote:
we can think of no more effective exclusion than progressively to embrace each new opportunity as it opened, and to face every newcomer with new capacity already geared into a great organization, having the advantage of experience, trade connections and the elite of personnel.
Antitrust has come a long way since then. By way of contrast, today’s antitrust analysis of alleged exclusionary conduct begins with (ironically enough) the U.S. v. Microsoft decision. Microsoft emphasizes the difficulty of distinguishing effective competition from exclusionary conduct; but it also firmly places “consumer welfare” as the lodestar of the modern approach to antitrust:
Milton Mueller responded to my post Wednesday on the DOJ’s decision to sue to block the AT&T/T-Mobile merger by asserting that there was no evidence the merger would lead to “anything innovative and progressive” and claiming “[t]he spectrum argument fell apart months ago, as factual inquiries revealed that AT&T had more spectrum than Verizon and the mistakenly posted lawyer’s letter revealed that it would be much less expensive to expand its capacity than to acquire T-Mobile.” With respect to Milton, I think he’s been suckered by the “big is bad” crowd at Public Knowledge and Free Press. But he’s hardly alone, and these claims — claims that may well have undergirded the DOJ’s decision to step in, at least in part — merit thorough refutation.
To begin with, LTE is “progress” and “innovation” over 3G and other quasi-4G technologies. AT&T is attempting to make an enormous (and risky) investment in deploying LTE technology reliably and to almost everyone in the US–something T-Mobile certainly couldn’t do on its own and something AT&T would have been able to do only partially and over a longer time horizon and, presumably, at greater expense. Such investments are exactly the things that spur innovation across the ecosystem in the first place. No doubt AT&T’s success here would help drive the next big thing–just as quashing it will make the next big thing merely the next medium-sized thing.
The “Spectrum Argument”
The spectrum argument that Milton claims “fell apart months ago” is the real story here, the real driver of this merger, and the reason why the DOJ’s action yesterday is, indeed, a blow to progress. That argument, unfortunately, still stands firm. The greater irony is that the spectrum shortfall is, to a significant extent, of the government’s own making: mismanagement of spectrum by the FCC, political dithering by Congress, and local government intransigence on tower siting and co-location. The notion of the government now intervening to “fix” one of the most significant private efforts to make progress despite those impediments is deeply troubling.
Anyway, here’s what we know about spectrum: There isn’t enough of it in large enough blocks and in bands suitable for broadband deployment using available technology to fully satisfy current–let alone future–demand.
Two data points in the news over the past 24 hours to consider:
- A new report on “Smartphone Adoption & Usage” by the Pew Internet Project finds that “one third of American adults – 35% – own smartphones” and that of that group “some 87% of smartphone owners access the Internet or email on their handheld” and “25% of smartphone owners say that they mostly go online using their phone, rather than with a computer.”
- According to the Wall Street Journal, the “Average iPhone Owner Will Download 83 Apps This Year.” That’s up from an average of 51 apps downloaded in 2010. (At first I was astonished when I read that, but then realized that I’ve probably downloaded an equal number of apps myself, albeit on an Android-based device.)
As I explain in my latest Forbes column, facts like these help us understand “How iPhones And Androids Ushered In A Smartphone Pricing Revolution.” That is, major wireless carriers are in the process of migrating from flat-rate, “all-you-can-eat” wireless data plans to usage-based plans. The reason is simple economics: data demand is exploding faster than data supply can keep up.
“It’s been four years since the introduction of the iPhone and rival devices that run Google’s Android software,” notes Cecilia Kang of The Washington Post. “In that time, the devices have turned much of America into an always-on, Internet-on-the-go society.” Indeed, but it’s not just the iPhone and Android smartphones. It’s all those tablets that have just come online over the past year, too. We are witnessing a tectonic shift in how humans consume media and information, and we are witnessing this revolution unfold over a very short time frame.
Of all the shockingly naive and shamelessly self-serving editorials I’ve read by businesspeople in recent years, today’s Wall Street Journal op-ed by Netflix general counsel David Hyman really takes the cake. It’s an implicit plea to policymakers for broadband price controls. Hyman doesn’t like the idea of broadband operators potentially pricing bandwidth according to usage and demand, and he wants action taken to stop it. Of course, why wouldn’t he say that? It’s in Netflix’s best interest to ensure that somebody other than Netflix picks up the tab for increased broadband consumption!
But Hyman tries to pull a fast one on the reader by suggesting that scarcity is an economic illusion and that any effort by broadband operators to migrate to usage-based pricing schemes is simply a nefarious, anti-consumer plot that must be foiled. “Consumers and regulators need to take heed of what is happening and avoid winding up like the proverbial frog in a pot of boiling water,” Hyman warns. “It’s time to jump before it’s too late.”
Rubbish! The only thing policymakers need to do is avoid myopic, misguided advice like Hyman’s, which isn’t based on one iota of economic theory or evidence.
[Cross-Posted at Truthonthemarket.com]
I did not intend for this to become a series (Part I), but I underestimated the supply of analysis simultaneously invoking “search bias” as an antitrust concept while waving it about untethered from antitrust’s institutional commitment to protecting consumer welfare. Harvard Business School Professor Ben Edelman offers the latest iteration in this genre. We’ve criticized his claims regarding search bias and antitrust on precisely these grounds.
For those who have not been following the Google antitrust saga, Google’s critics allege Google’s algorithmic search results “favor” its own services and products over those of rivals in some indefinite, often unspecified, improper manner. In particular, Professor Edelman and others — including Google’s business rivals — have argued that Google’s “bias” discriminates most harshly against vertical search engine rivals, i.e., rivals offering specialized search services. In framing the theory that “search bias” can be a form of anticompetitive exclusion, Edelman writes:
Search bias is a mechanism whereby Google can leverage its dominance in search, in order to achieve dominance in other sectors. So for example, if Google wants to be dominant in restaurant reviews, Google can adjust search results, so whenever you search for restaurants, you get a Google reviews page, instead of a Chowhound or Yelp page. That’s good for Google, but it might not be in users’ best interests, particularly if the other services have better information, since they’ve specialized in exactly this area and have been doing it for years.
I’ve wondered what model of antitrust-relevant conduct Professor Edelman, an economist, has in mind. It is certainly well known in both the theoretical and empirical antitrust economics literature that “bias” is neither necessary nor sufficient for a theory of consumer harm; further, it is fairly obvious as a matter of economics that vertical integration can be, and typically is, both efficient and pro-consumer. Still further, the bulk of economic theory and evidence on such vertical arrangements suggests that they are generally efficient and a normal part of the competitive process, generating consumer benefits.
Last week the Senate Commerce Committee passed – with deep bipartisan support – the Public Safety Spectrum and Wireless Innovation Act.
The bill, co-sponsored by Committee Chairman Jay Rockefeller and Ranking Member Kay Bailey Hutchison, is a comprehensive effort to resolve several long-standing stalemates and impending crises having to do with one of the most critical 21st century resources: radio spectrum.
My analysis of the bill appears today on CNET. See “Spectrum reform, public safety network move forward in Senate.”
The proposed legislation is impressive in scope; it offers new and in some cases novel solutions to more than half a dozen spectrum-related problems.
One of my favorite topics lately has been the challenges faced by information control regimes. Jerry Brito and I are writing a big paper on this issue right now. Part of the story we tell is that the sheer scale and volume of modern information flows is becoming so overwhelming that it raises practical questions about just how effective any info control regime can be. [See our recent essays on the topic: 1, 2, 3, 4, 5.] As we continue our research, we’ve been attempting to unearth some good metrics and factoids to help tell this story. It’s challenging because there aren’t many consistent data sets depicting online data growth over time, and some of the best anecdotes from key digital companies are only released sporadically. Anyway, I’d love to hear from others about good metrics and data sets that we should be examining. In the meantime, here are a few fun facts I’ve unearthed in my research so far; a quick back-of-the-envelope check of some of the rate math appears after the list. Please let me know if more recent data is available. [Note: Last updated 7/18/11]
- Facebook: users submit around 650,000 comments on the 100 million pieces of content served up every minute on its site.[1] People on Facebook install 20 million applications every day.[2]
- YouTube: every minute, 48 hours of video are uploaded. According to Peter Kafka of The Wall Street Journal, “That’s up 37 percent in the last six months, and 100 percent in the last year. YouTube says the increase comes in part because it’s easier than ever to upload stuff, and in part because YouTube has started embracing lengthy live streaming sessions. YouTube users are now watching more than 3 billion videos a day. That’s up 50 percent from the last year, which is also a huge leap, though the growth rate has declined a bit: Last year, views doubled from a billion a day to two billion in six months.”[3]
- eBay is now the world’s largest online marketplace with more than 90 million active users globally and $60 billion in transactions annually, or $2,000 every second.[4]
- Google: 34,000 searches per second (2 million per minute; 121 million per hour; 3 billion per day; 88 billion per month).[5]
- Twitter already has 300 million users producing 140 million Tweets a day, which adds up to a billion Tweets every 8 days[6] (about 1,600 Tweets per second). “On the first day Twitter was made available to the public, 224 tweets were sent. Today, that number of updates are posted at least 10 times a second.”[7]
- Apple: more than 10 billion apps have been downloaded from its App Store by customers in over 77 countries.[8] According to Chris Burns of SlashGear, “Currently it appears that another thousand apps are downloaded every 9 seconds in the Android Marketplace while every 3 seconds another 1,000 apps are downloaded in the App Store.”
- Yelp: as of July 2011 the site hosted over 18 million user reviews.[9]
- Wikipedia: Every six weeks, there are 10 million edits made to Wikipedia.[10]
- “Humankind shared 65 exabytes of information in 2007, the equivalent of every person in the world sending out the contents of six newspapers every day.”[11]
- Researchers at the San Diego Supercomputer Center at the University of California, San Diego, estimate that, in 2008, the world’s 27 million business servers processed 9.57 zettabytes, or 9,570,000,000,000,000,000,000 bytes of information. This is “the digital equivalent of a 5.6-billion-mile-high stack of books from Earth to Neptune and back to Earth, repeated about 20 times a year.” The study also estimated that enterprise server workloads are doubling about every two years, “which means that by 2024 the world’s enterprise servers will annually process the digital equivalent of a stack of books extending more than 4.37 light-years to Alpha Centauri, our closest neighboring star system in the Milky Way Galaxy.”[12]
- According to Dave Evans, Cisco’s chief futurist and chief technologist for the Cisco Internet Business Solutions Group, about 5 exabytes of unique information were created in 2008. That’s 1 billion DVDs. Fast forward three years and we are creating 1.2 zettabytes, with one zettabyte equal to 1,024 exabytes. “This is the same as every person on Earth tweeting for 100 years, or 125 million years of your favorite one-hour TV show,” says Evans. Our love of high-definition video accounts for much of the increase. By Cisco’s count, 91% of Internet data in 2015 will be video.[13]
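For the curious, here’s a minimal Python sketch that sanity-checks a few of the rate conversions quoted in the list above. The input numbers come straight from the items; the conversion constants (a 4.7 GB single-layer DVD, decimal exabytes, a 365-day year) are assumptions of mine rather than figures from the cited sources.

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds

# Assumed conversion constants (not from the cited sources):
DVD_BYTES = 4.7e9   # ~4.7 GB single-layer DVD
EXABYTE = 1e18      # decimal exabyte, in bytes

# Google: 34,000 searches per second
google_per_day = 34_000 * SECONDS_PER_DAY
print(f"Google searches per day: {google_per_day / 1e9:.1f} billion")        # ~2.9 billion

# Twitter: 140 million Tweets per day
print(f"Tweets per second: {140e6 / SECONDS_PER_DAY:,.0f}")                  # ~1,600
print(f"Days to reach 1 billion Tweets: {1e9 / 140e6:.1f}")                  # ~7-8 days

# eBay: $60 billion in transactions per year
print(f"eBay dollars per second: {60e9 / (365 * SECONDS_PER_DAY):,.0f}")     # ~1,900

# Cisco: 5 exabytes of unique information created in 2008
print(f"DVD equivalents of 5 EB: {5 * EXABYTE / DVD_BYTES / 1e9:.2f} billion")  # ~1 billion
```

Within rounding, these line up with the per-second and per-day figures quoted above.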
[12] Rex Graham, “Business Information Consumption: 9,570,000,000,000,000,000,000 Bytes per Year,” UC San Diego News Center, April 6, 2011, http://ucsdnews.ucsd.edu/newsrel/general/04-05BusinessInformation.asp.