In the upcoming issue of Harvard Business Review, my colleague Paul Nunes at Accenture’s Institute for High Performance and I are publishing the first of many articles from an on-going research project on what we are calling “Big Bang Disruption.”

The project is looking at the emerging ecosystem for innovation based on disruptive technologies.  It expands on work we have done separately and now together over the last fifteen years.

Our chief finding is that the nature of innovation has changed dramatically, calling into question much of the conventional wisdom on business strategy and competition, especially in information-intensive industries, which is to say, these days, every industry.

The drivers of this new ecosystem are ever-cheaper, faster, and smaller computing devices; cloud-based virtualization; crowdsourced financing; collaborative development and marketing; and the proliferation of mobile everything.  There will soon be more smartphones sold than there are people in the world.  And before long, each of the more than one trillion items in commerce will be added to the network.

The result is that new innovations now enter the market cheaper, better, and more customizable than the products and services they challenge.  (For example, smartphone-based navigation apps versus standalone GPS devices.)  In the strategy literature, such innovation would be characterized as thoroughly “undisciplined.”  It shouldn’t succeed.  But it does. Continue reading →

I’m excited to announce that the Minnesota Journal of Law, Science & Technology has just published the final version of my 78-page paper, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle.” My thanks to the excellent team at the Journal, who made the final product a much better paper than the one I turned in to them! I poured my heart and soul into this article and hope others find it useful. It’s the culmination of all my work on technopanics and threat inflation in information policy debates, much of which I originally developed here in various essays through the years. In coming weeks, I hope to elaborate on themes I develop in the paper in a couple of posts here.

The paper can be found on the Minn. J. L. Sci. & Tech. website or on SSRN. I’ve also embedded it below in a Scribd reader. Here’s the executive summary: Continue reading →

Christopher S. Yoo, the John H. Chestnut Professor of Law, Communication, and Computer & Information Science at the University of Pennsylvania and author of the new book The Dynamic Internet: How Technology, Users, and Businesses are Transforming the Network, explains that the Internet we knew in its early days, with its client-server architecture, small population of expert users, and limited set of applications and business cases, has changed radically, and that the architecture underlying the Internet may need to change as well.

According to Yoo, the Internet we use today barely resembles the original Defense Department and academic network from which it emerged. The applications that dominated the early Internet—e-mail and web browsing—have been joined by new applications such as video and cloud computing, which place much greater demands on the network. Wireless broadband and fiber optics have emerged as important alternatives to transmission services provided via legacy telephone and cable television systems, and mobile devices are replacing personal computers as the dominant means of accessing the Internet. At the same time, the networks that make up the Internet are interconnecting at a wider variety of locations, and on a wider variety of economic terms, than ever before.

These changes are placing pressure on the Internet’s architecture to evolve in response, Yoo says. The Internet is becoming less standardized, more subject to formal governance, and more reliant on intelligence located in the core of the network. At the same time, Internet pricing is becoming more complex, intermediaries are playing increasingly important roles, and the maturation of the industry is causing the nature of competition to change. Moreover, the total convergence of all forms of communications into a single network predicted by many observers may turn out to be something of a myth. Policymakers, Yoo says, should allow room for this natural evolution of the network to take place.

Congress recently mandated that the Federal Communications Commission (FCC) make additional spectrum available through a novel incentive auction designed to transition television broadcast spectrum to mobile use. The FCC’s task is to adequately compensate television broadcasters for relinquishing their spectrum while ensuring such spectrum is rapidly transitioned to mobile uses that benefit consumers nationwide.

This will be the most challenging and complex auction design the FCC has ever attempted. The FCC cannot avoid the complexity inherent in this unique auction design, but it can emphasize simplicity and exercise restraint when considering the other service rules that will govern this new spectrum. To maximize its opportunity for success in this daunting circumstance, the FCC should leverage proven policies wherever possible.
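For readers unfamiliar with the mechanics: the statute pairs a reverse auction, in which broadcasters name prices for relinquishing spectrum, with a forward auction, in which mobile bidders compete for the cleared licenses. The toy sketch below is my own single-round simplification in Python, not the FCC’s actual rules; the function name and numbers are invented. It shows the basic clearing constraint that makes the design so delicate: the forward side must cover the reverse side’s payments plus costs.

```python
# Toy clearing check for a two-sided incentive auction. My simplification,
# not the FCC's actual rules: buy the cheapest broadcaster asks needed to
# clear k licenses, sell k licenses to the highest forward bids, and close
# only if forward revenue covers reverse payments plus relocation costs.

def can_close(k: int, asks: list[float], bids: list[float], costs: float) -> bool:
    payments = sum(sorted(asks)[:k])               # cheapest stations to clear
    revenue = sum(sorted(bids, reverse=True)[:k])  # highest mobile bids win
    return revenue >= payments + costs

# Clearing two licenses raises 15 against 8 in payments and 2 in costs.
print(can_close(k=2, asks=[3.0, 5.0, 9.0], bids=[8.0, 7.0, 1.0], costs=2.0))  # True
```

If the constraint fails, a real design has to shrink the clearing target and try again, which is one source of the complexity noted above.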

Successful spectrum policies are critical to sustaining innovation, economic growth, and global competitiveness in the mobile era. Today, consumer demand for tablets and smartphones is straining the limits of mobile Internet capacity, which is threatening our nation’s lead in mobile innovation. The quickest and least costly way to increase mobile network capacity is to add spectrum, and the incentive auction is how the FCC intends to bolster our spectrum resources. The continued success of the mobile Internet thus depends on the success of the incentive auction, and the auction’s success depends on policy decisions that must be made by the FCC. Continue reading →

Today marks the seventeenth birthday of the Telecommunications Act of 1996. In the seventeen years since it became law, the 1996 Act has largely succeeded in meeting its principal goals. Ironically, its success is becoming its potential failure.

By the time most teenagers turn seventeen, they have already begun planning their future after high school. Their primary school achievements are only a beginning in a lifetime of future possibilities. For most legislation, however, there is no future after the initial goals of Congress are achieved. Fortunately, the seventeen-year-old 1996 Act isn’t like most legislation.

Congress recognized that when the goals of the 1996 Act were achieved, many of its regulations would no longer be necessary. In its wisdom, Congress provided the FCC with statutory authority to adapt our communications laws to future changes in the communications market. This authority includes the ability for the FCC to forbear from applying an unnecessary or outdated law.

Unfortunately, the FCC has been very reluctant to exercise this authority. It has instead preferred to remain within the familiar walls of stagnant regulations while the opportunity of Internet transformation knocks on the door. If the FCC refuses to use its forbearance authority, the only future for the 1996 Act is to live in the proverbial parents’ basement and eat 20th Century leftovers. If the FCC instead chooses to act, it could accelerate investment in new broadband infrastructure and the transition to an all-Internet future. Continue reading →

Via a Twitter post this morning, privacy lawyer Stephen Kline (@steph3n) brings to my attention this new California bill that “would require the privacy policy [of a commercial Web site or online service] to be no more than 100 words, be written in clear and concise language, be written at no greater than an 8th grade reading level, and to include a statement indicating whether the personally identifiable information may be sold or shared with others, and if so, how and with whom the information may be shared.”

I’ve always been interested in efforts, both on the online safety and digital privacy fronts, to push for “simplified” disclosure policies and empowerment tools. Generally speaking, increased notice and simplified transparency in these and other contexts are good norms that companies should follow. However, as I point out in a forthcoming law review article in the Harvard Journal of Law & Public Policy, we need to ask whether the highly litigious nature of America’s legal culture will allow for truly “simplified” privacy policies. As I note in the article, by its very nature, “simplification” likely entails less specificity about the legal duties and obligations of either party. Consequently, some companies will rightly fear that a move toward more simplified privacy policies could open them up to greater legal liability. If policymakers persist in the effort to force the simplification of privacy policies, therefore, they may need to extend some sort of safe harbor to site operators whose clearly worded privacy policies are later subject to litigation because of their lack of specificity. If not, site operators will find themselves in a “damned if you do, damned if you don’t” position: satisfying regulators’ desire for simplicity will open them up to attacks by those eager to exploit the lack of specificity inherent in a simplified privacy policy.
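A side note on enforceability: the bill’s two numeric limits (100 words, an 8th grade reading level) invite the question of how “reading level” would even be measured. One common proxy is the Flesch-Kincaid grade formula. Here is a minimal Python sketch of how a site operator might pre-check a draft policy; the function names and the crude syllable heuristic are my own, and a real compliance check would depend on whatever metric regulators ultimately specify.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count vowel groups. Real checkers use dictionaries.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # FK grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

def check_policy(policy: str, max_words: int = 100, max_grade: float = 8.0) -> dict:
    # The bill's two numeric limits: 100 words, 8th-grade reading level.
    words = re.findall(r"[A-Za-z']+", policy)
    grade = flesch_kincaid_grade(policy)
    return {"words": len(words), "words_ok": len(words) <= max_words,
            "grade": round(grade, 1), "grade_ok": grade <= max_grade}

sample = ("We collect your name and email. We share them with advertisers. "
          "You can ask us to delete your data at any time.")
print(check_policy(sample))
```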

Another issue to consider comes down to simple bureaucratic sloth: Continue reading →

Brookings has a new report out by Jonathan Rothwell, José Lobo, Deborah Strumsky, and Mark Muro that “examines the importance of patents as a measure of invention to economic growth and explores why some areas are more inventive than others.” (p. 4) Since I doubt that non-molecule patents have a substantial effect on growth, I was curious to examine the paper’s methodology. So I skimmed through the study, which referred me to a technical appendix, which referred me to the authors’ working paper on SSRN.

The authors basically regress log output per worker on ten-year-lagged measures of patenting in a fixed-effects model whose panel units are U.S. metropolitan areas.
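For concreteness, here is a minimal sketch of that specification in Python on synthetic data. The column names, the data-generating process, and my addition of year effects alongside the metro fixed effects are all my own assumptions, not necessarily the authors’ exact setup.

```python
# Minimal sketch of the specification as I read it, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = [(m, y) for m in range(50) for y in range(1980, 2011)]
df = pd.DataFrame(rows, columns=["metro", "year"])
df["patents_per_worker"] = rng.gamma(2.0, 1.0, len(df))

# Ten-year lag of patenting within each metro
df = df.sort_values(["metro", "year"])
df["patents_lag10"] = df.groupby("metro")["patents_per_worker"].shift(10)

# Synthetic outcome: lagged patenting raises productivity by construction
df["log_output_per_worker"] = (
    0.05 * df["patents_lag10"].fillna(0) + rng.normal(0, 0.1, len(df))
)

# Metro and year fixed effects via dummies; the "within" estimator
# (demeaning by metro) would recover the same slope.
fit = smf.ols(
    "log_output_per_worker ~ patents_lag10 + C(metro) + C(year)",
    data=df.dropna(subset=["patents_lag10"]),
).fit()
print(fit.params["patents_lag10"])  # recovers the planted 0.05
```

Running it recovers the planted coefficient, which is all a toy like this can show; whether lagged patenting identifies a causal effect on productivity in real data is exactly the question at issue.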

Continue reading →

Jerry Brito and WCITLeaks co-creator Eli Dourado have a conversation about the recent World Conference on International Telecommunications (WCIT), a UN treaty conference that delved into questions of Internet governance.

In the lead-up to WCIT—which was convened to review the International Telecommunication Regulations (ITRs)—access to preparatory reports and proposed modifications to the ITRs was limited to International Telecommunication Union (ITU) member states and a few other privileged parties. Internet freedom advocates worried that member states would use WCIT as an opportunity to exert control over the Internet. Frustrated by the lack of transparency, Brito and Dourado created WCITLeaks.org, which publishes leaked ITU documents from anonymous sources.

In December, Dourado traveled to Dubai as a member of the U.S. delegation and got an insider’s view of the politics behind international telecommunications policy. Dourado shares his experiences of the conference, what its failure means for the future of Internet freedom, and why the ITU is not as neutral as it claims.

The D.C. tech world is abuzz today over a front-page story in *The Washington Post* by Cecilia Kang announcing an exciting new plan from the FCC “to create super WiFi networks across the nation, so powerful and broad in reach that consumers could use them to make calls or surf the Internet without paying a cellphone bill every month.”

“Designed by FCC Chairman Julius Genachowski,” Kang explains, “the plan would be a global first.” And that’s not all: “If all goes as planned, free access to the Web would be available in just about every metropolitan area and in many rural areas.” Wow. Nationwide Internet access for all, at no charge?!

Aggregators have run with the story, re-reporting Kang’s amazing scoop. Here’s Mashable:

>The proposal, first reported by The Washington Post, would require local television stations and broadcasters to sell wireless spectrum to the government. The government would then use that spectrum to build public Wi-Fi networks.

And here’s Business Insider:

>The Federal Communications Commission wants to create so-called “super WiFi” networks all over the United States, sending the $178 billion telecom industry scrambling, The Washington Post’s Cecilia Kang reports. … Under the proposal, the FCC would provide free, baseline WiFi access in “just about every metropolitan area and in many rural areas” using the same air wave frequencies that empower AM radio and the broadcast television spectrum.

Free Wi-Fi networks, folks! Wow, what an amazing new plan. But, wait a minute. Who is going to pay for these free nationwide networks? They’ve got to be built, after all. Hmmm. It doesn’t seem like the article really explains that part. The cool thing about living in the future, though, is that you can just ask for clarification. So, DSLReports’ Karl Bode asked Kang: Continue reading →

I finally got around to reading this interesting little paper by Justus Haucap and Ulrich Heimeshoff, published by the Düsseldorf Institute for Competition Economics, entitled “Google, Facebook, Amazon, eBay: Is the Internet Driving Competition or Market Monopolization?” It offers a nice snapshot of the current state of play in several online sectors and surveys much of the relevant economic literature on antitrust and information technology markets. The authors also familiarize readers with the basic economic concepts that are hotly debated in the field of digital economics, including network effects, switching costs, multi-homing, and economies of scale.
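For readers meeting those concepts for the first time, here is a deliberately stylized sketch of how the first two interact; the Metcalfe-style value function and all numbers are toy assumptions of mine, not anything from the paper.

```python
# Stylized toy model of network effects and switching costs; all numbers invented.

def per_user_value(n_users: int, k: float = 1.0) -> float:
    # Network effects: each user can reach n-1 others, so a service's value
    # to any one user grows with network size (total value scales like n^2).
    return k * (n_users - 1)

def user_switches(incumbent_n: int, entrant_n: int, switching_cost: float) -> bool:
    # Switching costs: a user defects only if the entrant's value edge exceeds
    # the cost of moving, which is why entrants subsidize adoption or rely on
    # multi-homing (using both services at once) to get started.
    return per_user_value(entrant_n) - per_user_value(incumbent_n) > switching_cost

print(user_switches(incumbent_n=1_000_000, entrant_n=10_000, switching_cost=5.0))
# False: scale plus switching costs protect the incumbent, at least for now
```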

What I particularly like about their paper is that it struggles with the two competing narratives that dominate debates over digital age economics. Here’s how Haucap and Heimeshoff put it in the introduction:

>On the one hand, it is rather obvious that many very successful Internet-based companies are nearly monopolists. Google, Youtube, Facebook, and Skype are typical examples of Internet firms who dominate their relevant markets and who leave only limited space for a relatively small competitive fringe. Furthermore, most of these providers do not generate content themselves, but “only” provide access to different content on the Internet. On the other hand, the crucial question from a competition policy perspective is not so much whether these firms have such a dominant position today, but rather why they have such a large market share and whether this is a temporary or non-temporary phenomenon. Do these Internet monopolies enjoy a dominant position because they are protected from competition through barriers to entry or do they just enjoy the profits of superior technology and innovation? Are we observing some sort of Schumpeterian competition where one temporary monopoly is followed by another, with innovation as the driving competitive force, or are we dealing with monopoly firms that mainly try to foreclose their markets through anticompetitive behavior?

Faithful readers know from my past rantings here on this blog, in Forbes columns, and in various working papers that I am firmly in the “Schumpeterian competition” camp. Continue reading →