February 2013

Politicians from both parties are now saying that although President Obama took comprehensive action on cybersecurity through executive order, we still need legislation. Over at TIME.com I write that no, we don’t.

Republicans want to protect businesses from suit for breach of contract or privacy statute violations in the name of information sharing, but there’s no good reason for such blanket immunity. Democrats would like to see mandated security standards, but top-down regulation is a bad idea, especially in such a fast-moving area. But as I write:

>Yet guided by their worst impulses – to extend protections to business, or to exert bureaucratic control – members of Congress will insist that it is imperative they get in on the action.

>If they do, they will undoubtedly be saddling us with a host of unintended consequences that we will come to regret later.

The executive order does most of what Congress failed to do in its last session. What Congress could add now is unnecessary and likely pernicious. The executive order should be given time to work. Only then will Congress know if and how it might need to be “strengthened.”

Ronald A. Cass, Dean Emeritus of Boston University School of Law, discusses his new book, Laws of Creation: Property Rights in the World of Ideas, which he co-authored with Boston University colleague Keith Hylton. Written as a primer for understanding intellectual property law and a defense of intellectual property, Laws of Creation explains the basis of IP and its justification. 

According to Cass, not all would-be reformers share the same guiding philosophy; he distinguishes between those who support property rights but nevertheless have specific critiques of the intellectual property system as it currently stands, and those who do not see a place for property at all.

Cass explains that the current intellectual property system is neither wholly good nor wholly bad, but is a matter of weighing tradeoffs. On the whole, he argues, intellectual property benefits society. Cass also argues that intellectual property law in the U.S. is still more functional than that in other countries, such as Italy, and that, while it would benefit from some reform, it is fundamentally a workable system.

In the upcoming issue of Harvard Business Review, my colleague Paul Nunes at Accenture’s Institute for High Performance and I are publishing the first of many articles from an on-going research project on what we are calling “Big Bang Disruption.”

The project is looking at the emerging ecosystem for innovation based on disruptive technologies.  It expands on work we have done separately and now together over the last fifteen years.

Our chief finding is that the nature of innovation has changed dramatically, calling into question much of the conventional wisdom on business strategy and competition, especially in information-intensive industries–which is to say, these days, every industry.

The drivers of this new ecosystem are ever-cheaper, faster, and smaller computing devices, cloud-based virtualization, crowdsourced financing, collaborative development and marketing, and the proliferation of mobile everything.  There will soon be more smartphones sold than there are people in the world.  And before long, each of over one trillion items in commerce will be added to the network.

The result is that new innovations now enter the market cheaper, better, and more customizable than the products and services they challenge. (For example, smartphone-based navigation apps versus standalone GPS devices.) In the strategy literature, such innovation would be characterized as thoroughly “undisciplined.” It shouldn’t succeed. But it does. Continue reading →

I’m excited to announce that the Minnesota Journal of Law, Science & Technology has just published the final version of my 78-page paper, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle.” My thanks to the excellent team at the Journal, who made the final product a much better paper than the one I turned in to them! I poured my heart and soul into this article and hope others find it useful. It’s the culmination of all my work on technopanics and threat inflation in information policy debates, much of which I originally developed here in various essays through the years. In coming weeks, I hope to elaborate on themes I develop in the paper in a couple of posts here.

The paper can be found on the Minn. J. L. Sci. & Tech. website or on SSRN. I’ve also embedded it below in a Scribd reader. Here’s the executive summary: Continue reading →

Christopher S. Yoo, the John H. Chestnut Professor of Law, Communication, and Computer & Information Science at the University of Pennsylvania and author of the new book, The Dynamic Internet: How Technology, Users, and Businesses are Transforming the Network, explains that the Internet we knew in its early days—one with a client-server approach, a small number of expert users, and a limited set of applications and business cases—has radically changed, and the architecture underlying the Internet may need to change as well.

According to Yoo, the Internet we use today barely resembles the original Defense Department and academic network from which it emerged. The applications that dominated the early Internet—e-mail and web browsing—have been joined by new applications such as video and cloud computing, which place much greater demands on the network. Wireless broadband and fiber optics have emerged as important alternatives to transmission services provided via legacy telephone and cable television systems, and mobile devices are replacing personal computers as the dominant means of accessing the Internet. At the same time, the networks comprising the Internet are interconnecting at a wider variety of locations, and on a wider variety of economic terms, than ever before.

These changes are placing pressure on the Internet’s architecture to evolve in response, Yoo says. The Internet is becoming less standardized, more subject to formal governance, and more reliant on intelligence located in the core of the network. At the same time, Internet pricing is becoming more complex, intermediaries are playing increasingly important roles, and the maturation of the industry is causing the nature of competition to change. Moreover, the total convergence of all forms of communications into a single network predicted by many observers may turn out to be something of a myth. Policymakers, Yoo says, should allow room for this natural evolution of the network to take place.

Congress recently mandated that the Federal Communications Commission (FCC) make additional spectrum available through a novel incentive auction designed to transition television broadcast spectrum to mobile use. The FCC’s task is to adequately compensate television broadcasters for relinquishing their spectrum while ensuring such spectrum is rapidly transitioned to mobile uses that benefit consumers nationwide.

This will be the most challenging and complex auction design the FCC has ever attempted. The FCC cannot avoid the complexity inherent in this unique auction design, but it can emphasize simplicity and exercise restraint when considering the other service rules that will govern this new spectrum. To maximize its opportunity for success in this daunting circumstance, the FCC should leverage proven policies wherever possible.

Successful spectrum policies are critical to sustaining innovation, economic growth, and global competitiveness in the mobile era. Today, consumer demand for tablets and smartphones is straining the limits of mobile Internet capacity, which is threatening our nation’s lead in mobile innovation. The quickest and least costly way to increase mobile network capacity is to add spectrum, and the incentive auction is how the FCC intends to bolster our spectrum resources. The continued success of the mobile Internet thus depends on the success of the incentive auction, and the auction’s success depends on policy decisions that must be made by the FCC. Continue reading →

Today marks the seventeenth birthday of the Telecommunications Act of 1996. Since it became law nearly two decades ago, the 1996 Act has largely succeeded in meeting its principal goals. Ironically, its success is becoming its potential failure.

By the time most teenagers turn seventeen, they have already begun planning their future after high school. Their primary school achievements are only a beginning in a lifetime of future possibilities. For most legislation, however, there is no future after the initial goals of Congress are achieved. Fortunately, the seventeen-year-old 1996 Act isn’t like most legislation.

Congress recognized that when the goals of the 1996 Act were achieved, many of its regulations would no longer be necessary. In its wisdom, Congress provided the FCC with statutory authority to adapt our communications laws to future changes in the communications market. This authority includes the ability for the FCC to forbear from applying an unnecessary or outdated law.

Unfortunately, the FCC has been very reluctant to exercise this authority. It has instead preferred to remain within the familiar walls of stagnant regulations while the opportunity of Internet transformation knocks on the door. If the FCC refuses to use its forbearance authority, the only future for the 1996 Act is to live in the proverbial parents’ basement and eat 20th Century leftovers. If the FCC instead chooses to act, it could accelerate investment in new broadband infrastructure and the transition to an all-Internet future. Continue reading →

Via a Twitter post this morning, privacy lawyer Stephen Kline (@steph3n) brings to my attention this new California bill that “would require the privacy policy [of a commercial Web site or online service] to be no more than 100 words, be written in clear and concise language, be written at no greater than an 8th grade reading level, and to include a statement indicating whether the personally identifiable information may be sold or shared with others, and if so, how and with whom the information may be shared.”
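
To make the bill’s two mechanical requirements concrete, here is a rough sketch (my own illustration, not anything from the bill or Kline’s post) of how a site operator might check a draft policy against the 100-word cap and the 8th-grade reading level, with the latter approximated by the standard Flesch-Kincaid grade formula:

```python
import re

def syllable_count(word):
    """Crude syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def check_policy(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(syllable_count(w) for w in words)
    # Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    grade = 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59
    print(f"word count: {len(words)} (bill's limit: 100)")
    print(f"estimated reading level: grade {grade:.1f} (bill's limit: 8th grade)")

check_policy("We collect your email address to send receipts. We never sell or share it.")
```

Passing a crude check like this is the easy part; the harder question, as I discuss below, is what a policy that short necessarily leaves out.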

I’ve always been interested in efforts — both on the online safety and digital privacy fronts — to push for “simplified” disclosure policies and empowerment tools. Generally speaking, increased notice and simplified transparency in these and other contexts are good norms that companies should be following. However, as I point out in a forthcoming law review article in the Harvard Journal of Law & Public Policy, we need to ask ourselves whether the highly litigious nature of America’s legal culture will allow for truly “simplified” privacy policies. As I note in the article, by its very nature, “simplification” likely entails less specificity about the legal duties and obligations of either party. Consequently, some companies will rightly fear that a move toward more simplified privacy policies could open them up to greater legal liability. If policymakers persist in the effort to force the simplification of privacy policies, therefore, they may need to extend some sort of safe harbor provision to site operators for a clearly worded privacy policy that is later subject to litigation because of its lack of specificity. If not, site operators will find themselves in a “damned if you do, damned if you don’t” position: Satisfying regulators’ desire for simplicity will open them up to attacks by those eager to exploit the lack of specificity inherent in a simplified privacy policy.

Another issue to consider comes down to simple bureaucratic sloth: Continue reading →

Brookings has a new report out by Jonathan Rothwell, José Lobo, Deborah Strumsky, and Mark Muro that “examines the importance of patents as a measure of invention to economic growth and explores why some areas are more inventive than others.” (p. 4) Since I doubt that non-molecule patents have a substantial effect on growth, I was curious to examine the paper’s methodology. So I skimmed through the study, which referred me to a technical appendix, which referred me to the authors’ working paper on SSRN.

The authors are basically regressing log output per worker on 10-year-lagged measures of patenting in a fixed effects model using metropolitan areas in the United States.
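
For readers who want to see the mechanics, here is a minimal sketch (my own illustration, not the authors’ code) of that kind of specification, assuming a hypothetical annual panel with one row per metro area per year:

```python
# Minimal sketch of a metro fixed-effects regression of log output per
# worker on 10-year-lagged patenting. The CSV file and column names are
# hypothetical stand-ins, not the authors' actual data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("metro_panel.csv")  # hypothetical columns: metro, year, output_per_worker, patents_per_worker
df = df.sort_values(["metro", "year"])
df["log_output"] = np.log(df["output_per_worker"])
# 10-year lag of patenting, computed within each metro (assumes annual observations)
df["patents_lag10"] = df.groupby("metro")["patents_per_worker"].shift(10)

# Metro fixed effects enter as C(metro) dummies; year dummies could be added the same way
model = smf.ols("log_output ~ patents_lag10 + C(metro)", data=df.dropna())
print(model.fit().summary())
```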

Continue reading →

Jerry Brito and WCITLeaks co-creator Eli Dourado have a conversation about the recent World Conference on International Telecommunications (WCIT), a UN treaty conference that delved into questions of Internet governance.

In the lead-up to WCIT—which was convened to review the International Telecommunication Regulations (ITRs)—access to preparatory reports and proposed modifications to the ITRs was limited to International Telecommunication Union (ITU) member states and a few other privileged parties. Internet freedom advocates worried that the member states would use WCIT as an opportunity to exert control over the Internet. Frustrated by the lack of transparency, Brito and Dourado created WCITLeaks.org, which publishes leaked ITU documents from anonymous sources.

In December, Dourado traveled to Dubai as a member of the U.S. delegation and got an insider’s view of the politics behind international telecommunications policy. Dourado shares his experiences of the conference, what its failure means for the future of Internet freedom, and why the ITU is not as neutral as it claims.
