Technology Liberation Front
Keeping politicians' hands off the Net & everything else related to technology
http://techliberation.com

Comments to the New York Department of Financial Services on the Proposed Virtual Currency Regulatory Framework
http://techliberation.com/2014/08/14/comments-to-the-new-york-department-of-financial-services-on-the-proposed-virtual-currency-regulatory-framework/
Thu, 14 Aug 2014

Today my colleague Eli Dourado and I have filed a public interest comment with the New York Department of Financial Services on their proposed “BitLicense” regulatory framework for digital currencies. You can read it here. As we say in the comment, NYDFS is on the right track, but ultimately misses the mark:

State financial regulators around the country have been working to apply their existing money transmission licensing statutes and regulations to new virtual currency businesses. In many cases, existing rules do not take into account the unique properties of recent innovations like cryptocurrencies. With this in mind, the department sought to develop rules that were “tailored specifically to the unique characteristics of virtual currencies.”

As Superintendent Benjamin Lawsky has stated, the aim of this project is “to strike an appropriate balance that helps protect consumers and root out illegal activity—without stifling beneficial innovation.” This is the right goal and one we applaud. It is a very difficult balance to strike, however, and we believe that the BitLicense regulatory framework as presently proposed misses the mark, for two main reasons.

First, while doing much to take into account the unique properties of virtual currencies and virtual currency businesses, the proposal nevertheless fails to accommodate some of the most important attributes of software-based innovation. To the extent that one of its chief goals is to preserve and encourage innovation, the BitLicense proposal should be modified with these considerations in mind—and this can be done without sacrificing the protections that the rules will afford consumers. Taking into account the “unique characteristics” of virtual currencies is the key consideration that will foster innovation, and it is the reason why the department is creating a new BitLicense. The department should, therefore, make sure that it is indeed taking these features into account.

Second, the purpose of a BitLicense should be to take the place of a money transmission license for virtual currency businesses. That is to say, but for the creation of a new BitLicense, virtual currency businesses would be subject to money transmission licensing. Therefore, to the extent that the goal behind the new BitLicense is to protect consumers while fostering innovation, the obligations faced by BitLicensees should not be any more burdensome than those faced by traditional money transmitters. Otherwise, the new regulatory framework will have the opposite effect of the one intended. If it is more costly and difficult to acquire a BitLicense than a money transmission license, we should expect less innovation. Additional regulatory burdens would put BitLicensees at a relative disadvantage, and in several instances the proposed regulatory framework is more onerous than traditional money transmitter licensing.

As Superintendent Lawsky has rightly stated, New York should avoid virtual currency rules that are “so burdensome or unwieldy that the technology can’t develop.” The proposed BitLicense framework, while close, does not strike the right balance between consumer protection and innovation. To be sure, its approach to consumer protection through disclosures rather than prescriptive precautionary regulation is the right one, giving entrepreneurs flexibility to innovate while ensuring that consumers have the information they need to make informed choices. Yet there is much that can be improved in the framework to reach the goal of balancing innovation and protection. Below we outline where the framework is missing the mark and recommend some modifications that will take into account the unique properties of virtual currencies and virtual currency businesses.

We hope this comment will be helpful to the department as it further develops its proposed framework, and we hope it will publish a revised draft and solicit a second round of comments so that we can all make sure we get it right. And it’s important that we get it right.

Other jurisdictions, such as London, are looking to become the “global centre of financial innovation,” as Chancellor George Osborne put it in a recent speech about Bitcoin. If New York drops the ball, London may just pick it up. As Garrick Hileman, an economic historian at the London School of Economics, told CNET last week:

The chancellor is no doubt aware that very little of the $250 million of venture capital which has been invested in Bitcoin startups to date has gone to British-based companies. Many people believe Bitcoin will be as big as the Internet. Today’s announcement from the chancellor has the potential to be a big win for the UK economy. The bottom line on today’s announcement is that Osborne thinks he’s spotted an opportunity for the City and Silicon Roundabout to siphon investment and jobs away from the US and other markets which are taking a more aggressive Bitcoin regulatory posture.

Let’s get it right.

Study: No, US Broadband is not Falling Behind
http://techliberation.com/2014/08/13/us-broadband-is-not-falling-behind/
Wed, 13 Aug 2014

There’s a small but influential number of tech reporters and scholars who seem to delight in making the US sound like a broadband and technology backwater. A new Mercatus working paper by Roslyn Layton, a PhD fellow at a research center at Aalborg University, and Michael Horney, a researcher at the Free State Foundation, counters that narrative and highlights data from several studies showing that the US is at or near the top in important broadband categories.

For example, per Pew and ITU data, the vast majority of Americans use the Internet, and the US is second in the world in data consumption per capita, trailing only South Korea. Pew finds that, for those who are not online, the leading reasons are lack of usability and the Internet’s perceived lack of benefits. High cost, notably, is not the primary reason people stay offline.

I’ve noted before some of the methodological problems in studies claiming the US has unusually high broadband prices. In what I consider their biggest contribution to the literature, Layton and Horney highlight another broadband cost frequently omitted in international comparisons: the mandatory media license fees many nations impose on broadband and television subscribers.

These fees can add as much as $44 to the monthly cost of broadband. When these fees are included in comparisons, American prices are frequently an even better value. In two-thirds of European countries and half of Asian countries, households pay a media license fee on top of the subscription fees to use devices such as connected computers and TVs.

…When calculating the real cost of international broadband prices, one needs to take into account media license fees, taxation, and subsidies. …[T]hese inputs can materially affect the cost of broadband, especially in countries where broadband is subject to value-added taxes as high as 27 percent, not to mention media license fees of hundreds of dollars per year.
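To make the arithmetic concrete, here is a small illustrative calculation (a sketch using hypothetical round numbers, not figures taken from the paper) showing how a mandatory media license fee and a value-added tax change the real monthly price of a broadband subscription:

    # Illustrative sketch only: hypothetical round numbers, not data from Layton and Horney.
    def effective_monthly_cost(advertised_price, annual_license_fee=0.0, vat_rate=0.0):
        """Advertised monthly price plus VAT, plus any mandatory media license fee spread over 12 months."""
        return advertised_price * (1 + vat_rate) + annual_license_fee / 12

    # A US-style plan: $45/month advertised, no media license fee, no VAT on the advertised price.
    us_plan = effective_monthly_cost(45.00)

    # A hypothetical European-style plan: $35/month advertised, 27% VAT, $200/year media license fee.
    eu_plan = effective_monthly_cost(35.00, annual_license_fee=200.00, vat_rate=0.27)

    print(f"US-style plan:       ${us_plan:.2f} per month")   # 45.00
    print(f"European-style plan: ${eu_plan:.2f} per month")   # 61.12

The advertised European-style price looks cheaper, but the all-in price is not, which is the authors’ point about omitted inputs.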

US broadband providers, the authors point out, have priced broadband relatively efficiently for heterogeneous uses–there are low-cost, low-bandwidth connections available as well as more expensive, higher-quality connections for intensive users.

Further, the US is well-positioned for future broadband use. Unlike consumers in many wealthy countries, Americans typically have access to broadband from a telephone company (like AT&T DSL or UVerse) as well as from a local cable provider. Competition between ISPs has meant steady investment in network upgrades, despite the 2008 global recession. The story is very different in much of Europe, where broadband investment, as a percentage of the global total, has fallen noticeably in recent years. US wireless broadband is also a bright spot: 97% of Americans can subscribe to 4G LTE, while only 26% in the EU have access (which partially explains, by the way, why Europeans often pay less for mobile subscriptions–they’re using an inferior product).

There’s a lot to praise in the study, and it’s necessary reading for anyone looking to understand how US broadband policy compares to other nations’. The fashionable arguments that the US is at risk of falling behind technologically were never convincing–the US is THE place to be if you’re a tech company or startup, for one–but Layton and Horney undercut that narrative with data and rigor.

Is STELA the Vehicle for Video Reform?
http://techliberation.com/2014/08/08/is-stela-the-vehicle-for-video-reform/
Fri, 08 Aug 2014

Even though few things are getting passed in this Congress, the pressure is on to reauthorize the Satellite Television Extension and Localism Act (STELA) before it expires at the end of this year. Unsurprisingly, many have hoped this “must-pass” bill will be the vehicle for broader video reform. Getting video law right is important for our content-rich world, but the discussion needs to expand much further than STELA.

Over at the American Action Forum, I explore a bit of what would be needed, and just how deep the problems run:

The Federal Communications Commission’s (FCC) efforts to spark localism and diversity of voices in broadcasting stand in stark contrast to the relative lack of regulation governing non-broadcast content providers like Netflix and HBO, which have revolutionized delivery and upped the demand for quality content. These amorphous social goals have also constrained broadcasters. Without any consideration for the competitive balance in a local market, broadcasters are limited in what they can own, are saddled with various programming restrictions, and are subject to countless limitations in the use of their spectrum. Moreover, the FCC has sought to outlaw deals between broadcasters who negotiate jointly for services and ads.

In the effort to support specific “public interest” goals, the FCC has implemented certain regulations that have cabined both broadcasters and paid TV distributors. These regulations forced companies to develop in prescribed ways, which in turn prompted further regulatory action when they tried to innovate. Speaking about this cat-and-mouse game in the financial sector, Professor Edward Kane termed the relationship the “regulatory dialectic.”

But unwrapping the regulatory dialectic in video law will require a vehicle far more expansive than STELA. Ultimately, I conclude,

Both the quality of programming and the means of accessing it have undergone dramatic changes in the past two decades, but the regulations have not. Consumer preferences and choices are shifting, and the regulatory regime needs to shift with them. STELA is one part of the puzzle, but as with so many other areas of telecommunications law, a comprehensive look at the body of laws governing video is needed. It is increasingly clear that the laws governing programming must be updated to meet the 21st-century marketplace.

On this site especially, there has been a vigorous debate on just what this framework would entail. For a more comprehensive look, check out:

  • Geoffrey Manne’s testimony on STELA before the House Energy and Commerce Committee;
  • Adam Thierer and Brent Skorup’s paper on video law entitled, “Video Marketplace Regulation: A Primer on the History of Television Regulation and Current Legislative Proposals”;
  • Ryan Radia’s blog post entitled, “A Free Market Defense of Retransmission Consent”;
  • Fred Campbell’s white paper on the “Future of Broadcast Television,” as well as his various posts on the subject;
  • And Hance Haney’s posts on video law.
You know how IP creates millions of jobs? That’s pseudoscientific baloney
http://techliberation.com/2014/08/06/you-know-how-ip-creates-millions-of-jobs-thats-pseudoscientific-baloney/
Wed, 06 Aug 2014

In 2012, the US Chamber of Commerce put out a report claiming that intellectual property is responsible for 55 million US jobs—46 percent of private sector employment. This is a ridiculous statistic if you merely stop and think about it for a minute. But the fact that the statistic is ridiculous doesn’t mean that it won’t continue to circulate around Washington. For example, last year Rep. Marsha Blackburn cited it uncritically in an op-ed in The Hill.

In a new paper from Mercatus (here’s the PDF), Ian Robinson and I expose this statistic, and others like it, as pseudoscience. These statistics are based on incredibly shoddy and misleading reasoning. Here’s the abstract of the paper:

In the past two years, a spate of misleading reports on intellectual property has sought to convince policymakers and the public that implausibly high proportions of US output and employment depend on expansive intellectual property (IP) rights. These reports provide no theoretical or empirical evidence to support such a claim, but instead simply assume that the existence of intellectual property in an industry creates the jobs in that industry. We dispute the assumption that jobs in IP-intensive industries are necessarily IP-created jobs. We first explore issues regarding job creation and the economic efficiency of IP that cut across all kinds of intellectual property. We then take a closer look at these issues across three major forms of intellectual property: trademarks, patents, and copyrights.

As they say, read the whole thing, and please share with your favorite IP maximalist.

New York’s financial regulator releases a draft of ‘BitLicense’ for Bitcoin businesses. Here are my initial thoughts.
http://techliberation.com/2014/07/17/new-yorks-financial-regulator-releases-a-draft-of-bitlicense-for-bitcoin-businesses-here-are-my-initial-thoughts/
Thu, 17 Jul 2014

Today the New York Department of Financial Services released a proposed framework for licensing and regulating virtual currency businesses. Their “BitLicense” proposal [PDF] is the culmination of a yearlong process that included widely publicized hearings.

My initial reaction to the rules is that they are a step in the right direction. Whether one likes it or not, states will want to license and regulate Bitcoin-related businesses, so it’s good to see that New York engaged in a thoughtful process, and that the rules they have proposed are not out of the ordinary.

That said, I’m glad DFS will be accepting comments on the proposed framework because there are a few things that can probably be improved or clarified. For example:

  1. Licensees would be required to maintain “the identity and physical addresses of the parties involved” in “all transactions involving the payment, receipt, exchange or conversion, purchase, sale, transfer, or transmission of Virtual Currency.” That seems a bit onerous and unworkable.

    Today, if you have a wallet account with Coinbase, the company collects and keeps your identity information. Under New York’s proposal, however, they would also be required to collect the identity information of anyone you send bitcoins to, and anyone that sends bitcoins to you (which might be technically impossible). That means identifying every food truck you visit, and every alpaca sock merchant you buy from online.

    The same would apply to merchant service companies like BitPay. Today they identify their merchant account holders–say a coffee shop–but under the proposed framework they would also have to identify all of their merchants’ customers–i.e. everyone who buys a cup of coffee. Not only is this potentially unworkable, but it also would undermine some of Bitcoin’s most important benefits. For example, the ability to trade across borders, especially with those in developing countries who don’t have access to electronic payment systems, is one of Bitcoin’s greatest advantages and it could be seriously hampered by such a requirement.

    The rationale for creating a new “BitLicense” specific to virtual currencies was to design something that took the special characteristics of virtual currencies into account (something existing money transmission rules didn’t do). I hope the rule can be modified so that it can come closer to that ideal.

  2. The definition of who is engaged in “virtual currency business activity,” and thus subject to the licensing requirement, is quite broad. It has the potential to swallow up online wallet services, like Blockchain, that merely provide software to their customers rather than administer custodial accounts. It might also include non-financial services like Proof of Existence, which provides a notary service on top of the Bitcoin block chain. Ditto for other services, perhaps like NameCoin, that use cryptocurrency tokens to track assets like domain names.

  3. The rules would also require a license of anyone “controlling, administering, or issuing a Virtual Currency.” While I take this to apply to centralized virtual currencies, some might interpret it to also mean that you must acquire a license before you can deploy a new decentralized altcoin. That should be clarified.

In order to grow and reach its full potential, the Bitcoin ecosystem needs regulatory certainty from dozens of states. New York is taking a leading role in developing that regulatory structure, and the path it chooses will likely influence other states. This is why we have to make sure that New York gets it right. They are on the right track, and I look forward to engaging in the comment process to help them get all the way there.

SCOTUS Rules in Favor of Freedom and Privacy in Key Rulings
http://techliberation.com/2014/06/26/scotus-rules-in-favor-of-freedom-and-privacy-in-key-rulings/
Thu, 26 Jun 2014

Yesterday, June 25, 2014, the U.S. Supreme Court issued two important opinions that advance free markets and free people in Riley v. California and ABC v. Aereo. I’ll soon have more to say about the latter case, Aereo, in which my organization filed an amicus brief along with the International Center for Law and Economics. But for now, I’d like to praise the Court for reaching the right result in a pair of cases involving warrantless police searches of cell phones incident to lawful arrests.

Back in 2011, when I wrote a feature story for Ars Technica on this subject—which I discussed on these pages—police in many jurisdictions were free to search the cell phones of individuals incident to their arrest. If you were arrested for a minor traffic violation, for instance, the unencrypted contents of your cell phone were often fair game for searches by police officers.

Now, however, thanks to the Supreme Court, police may not search an arrestee’s cell phone incident to arrest without specific evidence giving rise to an exigency that justifies such a search. Given the broad scope of offenses for which police may arrest someone, this holding has important implications for individual liberty, especially in jurisdictions where police often exercise their search powers broadly.


Muddling Through: How We Learn to Cope with Technological Change
http://techliberation.com/2014/06/17/muddling-through-how-we-learn-to-cope-with-technological-change/
Tue, 17 Jun 2014

How is it that we humans have again and again figured out how to assimilate new technologies into our lives despite how much those technologies “unsettled” so many well-established personal, social, cultural, and legal norms?

In recent years, I’ve spent a fair amount of time thinking through that question in a variety of blog posts (“Are You An Internet Optimist or Pessimist? The Great Debate over Technology’s Impact on Society”), law review articles (“Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle”), op-eds (“Why Do We Always Sell the Next Generation Short?”), and books (see chapter 4 of my new book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom”).

It’s fair to say that this issue — how individuals, institutions, and cultures adjust to technological change — has become a personal obsession of mine and it is increasingly the unifying theme of much of my ongoing research agenda. The economic ramifications of technological change are part of this inquiry, of course, but those economic concerns have already been the subject of countless books and essays both today and throughout history. I find that the social issues associated with technological change — including safety, security, and privacy considerations — typically get somewhat less attention, but are equally interesting. That’s why my recent work and my new book narrow the focus to those issues.

Optimistic (“Heaven”) vs. Pessimistic (“Hell”) Scenarios

Modern thinking and scholarship on the impact of technological change on societies have been largely dominated by skeptics and critics.

In the past century, for example, French philosopher Jacques Ellul (The Technological Society), German historian Oswald Spengler (Man and Technics), and American historian Lewis Mumford (Technics and Civilization) penned critiques of modern technological processes that took a dour view of technological innovation and our collective ability to adapt positively to it. (Concise summaries of their thinking can be found in Christopher May’s edited collection of essays, Key Thinkers for the Information Society.)

These critics worried about the subjugation of humans to “technique” or “technics” and feared that technology and technological processes would come to control us before we learned how to control them. Media theorist Neil Postman was the most notable of the modern information technology critics and served as the bridge between the industrial era critics (like Ellul, Spengler, and Mumford) and some of today’s digital age skeptics (like Evgeny Morozov and Nick Carr). Postman decried the rise of a “technopoly” — “the submission of all forms of cultural life to the sovereignty of technique and technology” — that would destroy “the vital sources of our humanity” and lead to “a culture without a moral foundation” by undermining “certain mental processes and social relations that make human life worth living.” We see that attitude on display in countless works of technological criticism since then.

Of course, there’s been some pushback from some futurists and technological enthusiasts. But there’s often a fair amount of irrational exuberance at work in their tracts and punditry. Many self-proclaimed “futurists” have predicted that various new technologies would produce a nirvana that would overcome human want, suffering, ignorance, and more.

In a 2010 essay, I labeled these two camps technological “pessimists” and “optimists.” It was a crude and overly simplistic dichotomy, but it was an attempt to begin sketching out a rough taxonomy of the personalities and perspectives that we often see pitted against each other in debates about the impact of technology on culture and humanity.

Sadly, when I wrote that earlier piece, I was not aware of a similar (and much better) framing of this divide developed by science writer Joel Garreau in his terrific 2005 book, Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies — and What It Means to Be Human. In that book, Garreau thinks in much grander terms about technology and the future than I did in my earlier essay. He focuses on how various emerging technologies might be changing our very humanity, and he notes that narratives about these issues are typically framed in “Heaven” versus “Hell” scenarios.

Under the “Heaven” scenario, technology drives history relentlessly, and in almost every way for the better. As Garreau describes the beliefs of the Heaven crowd, they believe that going forward, “almost unimaginably good things are happening, including the conquering of disease and poverty, but also an increase in beauty, wisdom, love, truth, and peace.” (p. 130) By contrast, under the “Hell” scenario, “technology is used for extreme evil, threatening humanity with extinction.” (p. 95) Garreau notes that what unifies the Hell scenario theorists is the sense that in “wresting power from the gods and seeking to transcend the human condition,” we end up instead creating a monster — or maybe many different monsters — that threatens our very existence. Garreau says this “Frankenstein Principle” can be seen in countless works of literature and technological criticism throughout history, and it is still very much with us today. (p. 108)

Theories of Collapse: Why Does Doomsaying Dominate Discussions about New Technologies?

Indeed, in examining the way new technologies and inventions have long divided philosophers, scientists, pundits, and the general public, one can find countless examples of that sort of fear and loathing at work. “Armageddon has a long and distinguished history,” Garreau notes. “Theories of progress are mirrored by theories of collapse.” (p. 149)

In that regard, Garreau rightly cites Arthur Herman’s magisterial history of apocalyptic theories, The Idea of Decline in Western History, which documents “declinism” over time. The irony of much of this pessimistic declinist thinking, Herman notes, is that:

In effect, the very things modern society does best — providing increasing economic affluence, equality of opportunity, and social and geographic mobility — are systematically deprecated and vilified by its direct beneficiaries. None of this is new or even remarkable. (p. 442)

Why is that? Why has the “Hell” scenario been such a dominant recurring theme in writing and commentary throughout history, even though the general trend has been steady improvements in human health, welfare, and convenience?

There must be something deeply rooted in the human psyche that accounts for this tendency. As I have discussed in my new book as well as my big “Technopanics” law review article, our innate tendency toward pessimism, combined with our desire to be certain about the future, means that “the gloom-mongers have it easy,” as author Dan Gardner argues in his book, Future Babble: Why Expert Predictions Are Next to Worthless, and You Can Do Better. He goes on to note of the techno-doomsday pundits:

Their predictions are supported by our intuitive pessimism, so they feel right to us. And that conclusion is bolstered by our attraction to certainty. As strange as it sounds, we want to believe the expert predicting a dark future is exactly right, because knowing that the future will be dark is less tormenting than suspecting it. Certainty is always preferable to uncertainty, even when what’s certain is disaster. (p. 140-1)

Similarly, in his new book, Smarter Than You Think: How Technology Is Changing Our Minds for the Better, Clive Thompson notes that “dystopian predictions are easy to generate” and “doomsaying is emotionally self-protective: if you complain that today’s technology is wrecking the culture, you can tell yourself you’re a gimlet-eyed critic who isn’t hoodwinked by high-tech trends and silly, popular activities like social networking. You seem like someone who has a richer, deeper appreciation for the past and who stands above the triviality of today’s life.” (p. 283)

Another explanation is that humans are sometimes very poor judges of the relative risks to themselves or those close to them. Harvard University psychology professor Steven Pinker, author of The Blank Slate: The Modern Denial of Human Nature, notes:

The mind is more comfortable in reckoning probabilities in terms of the relative frequency of remembered or imagined events. That can make recent and memorable events—a plane crash, a shark attack, an anthrax infection—loom larger in one’s worry list than more frequent and boring events, such as the car crashes and ladder falls that get printed beneath the fold on page B14. And it can lead risk experts to speak one language and ordinary people to hear another. (p. 232)

Put simply, there exists a wide variety of explanations for why our collective first reaction to new technologies often is one of dystopian dread. In my work, I have identified several other factors, including: generational differences; hyper-nostalgia; media sensationalism; special interest pandering to stoke fears and sell products or services; elitist attitudes among intellectuals; and the so-called “third-person effect hypothesis,” which posits that when some people encounter perspectives or preferences at odds with their own, they are more likely to be concerned about the impact of those things on others throughout society and to call on government to “do something” to correct or counter those perspectives or preferences.

Some combination of these factors ends up driving the initial resistance we have seen to new technologies that disrupt long-standing social norms, traditions, and institutions. In the extreme, it results in that gloom-and-doom, sky-is-falling disposition in which we are repeatedly told how humanity is about to be steam-rolled by some new invention or technological development.

The “Prevail” (or “Muddling Through”) Scenario

“The good news is that end-of-the-world predictions have been around for a very long time, and none of them has yet borne fruit,” Garreau reminds us. (p. 148) Why not? Let’s get back to his framework for the answer. After discussing the “Heaven” (optimistic) and “Hell” (skeptical or pessimistic) scenarios cast about by countless tech writers throughout history, Garreau outlines a third, more pragmatic “Prevail” option, which views history “as a remarkably effective paean to the power of humans to muddle through extraordinary circumstances.”

That pretty much sums up my own perspective on things, and in the remainder of this essay I want to sketch out the reasons why I think the “prevail” or “muddling through” scenario offers the best explanation for how we learn to cope with technological disruption and prosper in the process.

As Garreau explains it, under the “Prevail” scenario, “humans shape and adapt [technology] in entirely new directions.” (p. 95) “Just because the problems are increasing doesn’t mean solutions might not also be increasing to match them,” he rightly notes. (p. 154) As John Seely Brown and Paul Duguid noted in their excellent 2001 essay, “Response to Bill Joy and the Doom-and-Gloom Technofuturists”:

technological and social systems shape each other. The same is true on a larger scale. […] Technology and society are constantly forming and reforming new dynamic equilibriums with far-reaching implications. The challenge for futurology (and for all of us) is to see beyond the hype and past the over-simplifications to the full import of these new sociotechnical formations.  Social and technological systems do not develop independently; the two evolve together in complex feedback loops, wherein each drives, restrains and accelerates change in the other.

It is this process of “constantly forming and reforming new dynamic equilibriums” that interests me most. In a recent exchange with Michael Sacasas — one of the most thoughtful modern technology critics I’ve come across — I noted that the nature of individual and societal acclimation to technological change is worthy of serious investigation if for no other reason than that it has continuously happened! What I hope to better understand is the process by which we humans have again and again figured out how to assimilate new technologies into our lives despite how much those technologies disrupted our personal, social, economic, cultural, and legal norms.

In a response to me, Sacasas put forth the following admonition: “That people eventually acclimate to changes precipitated by the advent of a new technology does not prove that the changes were inconsequential or benign.” This is undoubtedly true, but it does not undermine the reality of societal adaptation. What can we learn from this? What were the mechanics of that adaptive process? As social norms, personal habits, and human relationships were disrupted, what helped us muddle through and find a way of coping with new technologies? Likewise, as existing markets and business models were disrupted, how were new ones formulated in response to the given technological disruption? Finally, how did legal norms and institutions adjust to those same changes?

Of course, this raises an entirely different issue: What metrics are we using to judge whether “the changes were inconsequential or benign”? As I noted in my exchange with Sacasas, at the end of the day, it may be that we won’t be able to even agree on a standard by which to make that judgment and will instead have to settle for a rough truce about what history has to teach us that might be summed up by the phrase: “something gained, something lost.”

Resiliency: Why Do the Skeptics Never Address It (and Its Benefits)?

Nonetheless, I believe that while technological change often brings sweeping and quite consequential change, there is great value in the very act of living through it.

In my work, including my latest little book, I argue that humans have exhibited the uncanny ability to adapt to changes in their environment, bounce back from adversity, and learn to be resilient over time. A great deal of wisdom is born of experience, including experiences that involve risk and the possibility of occasional mistakes and failures while both developing new technologies and learning how to live with them. I believe it wise to continue to be open to new forms of innovation and technological change, not only because it provides breathing space for future entrepreneurialism and invention, but also because it provides an opportunity to see how societal attitudes toward new technologies evolve — and to learn from it. More often than not, I argue, citizens have found ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes.

What we’re talking about here is resiliency. Andrew Zolli and Ann Marie Healy, authors of Resilience: Why Things Bounce Back, define resilience as “the capacity of a system, enterprise, or a person to maintain its core purpose and integrity in the face of dramatically changed circumstances.” (p. 7) “To improve your resilience,” they note, “is to enhance your ability to resist being pushed from your preferred valley, while expanding the range of alternatives that you can embrace if you need to. This is what researchers call preserving adaptive capacity—the ability to adapt to changed circumstances while fulfilling one’s core purpose—and it’s an essential skill in an age of unforeseeable disruption and volatility.” (p. 7-8, emphasis in original) Moreover, they note, “by encouraging adaptation, agility, cooperation, connectivity, and diversity, resilience-thinking can bring us to a different way of being in the world, and to a deeper engagement with it.” (p. 16)

Even if one doesn’t agree with all of that, again, I would think one would find great value in studying the process by which such adaptation happens precisely because it does happen so regularly. And then we could argue about whether it was all really worth it! Specifically, was it worth whatever we lost in the process (i.e., a change in our old moral norms, our old privacy norms, our old institutions, our old business models, our old laws, or whatever else)?

As Sacasas correctly argues, “That people before us experienced similar problems does not mean that they magically cease being problems today.” Again, quite right. On the other hand, the fact that people and institutions learned to cope with those concerns and become more resilient over time is worthy of serious investigation because somehow we “muddled through” before and we’ll have to muddle through again. And, again, what we learned from living through that process may be extremely valuable in its own right.

Of Course, Muddling Through Isn’t Always Easy

Now, let’s be honest about this process of “muddling through”: it isn’t always neat or pretty. To put it crudely, sometimes muddling through really sucks! Think about the modern technologies that violate our visceral sense of privacy and personal space today. I am an intensely private person and if I had a life motto it would probably be: “Leave Me Alone!” Yet, sometimes there’s just no escaping the pervasive reach of modern technologies and processes. On the other hand, I know that, like so many others, I derive amazing benefits from all these new technologies, too. So, like most everyone else I put up with the downsides because, on net, there are generally more upsides.

Almost every digital service that we use today presents us with these trade-offs. For example, email has allowed us to connect with a constantly growing universe of our fellow humans and organizations. Yet spam clutters our mailboxes, and the sheer volume of email we get sometimes overwhelms us. Likewise, in just the past five years, smartphones have transformed our lives in so many ways for the better, in terms of not just personal convenience but also personal safety. On the other hand, smartphones have become more than a bit of a nuisance in certain environments (theaters, restaurants, and other closed spaces). And they also put our safety at risk when we use them while driving automobiles.

But, again, we adjust to most of these new realities and then we find constructive solutions to the really hard problems – yes, and that sometimes includes legal remedies to rectify serious harms. But a certain amount of social adaptation will, nonetheless, be required. Law can only slightly slow that inevitability; it can’t stop it entirely. And as messy and uncomfortable as muddling through can be, we have to (a) be aware of what we gain in the process and (b) ask ourselves what the cost of taking the alternative path would be. Attempts to throw a wrench in the works and derail new innovations or delay various types of technological change are always going to be tempting, but such interventions will come at a very steep cost: less entrepreneurialism, diminished competition, stagnant markets, higher prices, and fewer choices for citizens. As I note in my new book, if we spend all our time living in constant fear of worst-case scenarios — and premising public policy upon such fears — it means that many best-case scenarios will never come about.

Social Resistance / Pressure Dynamics

There’s another part to this story that often gets overlooked. “Muddling through” isn’t just some sort of passive process where individuals and institutions have to figure out how to cope with technological change. Rather, there is an active dynamic at work, too. Individuals and institutions push back and actively shape their tools and systems.

In a recent Wired essay on public attitudes about emerging technologies such as the controversial Google Glass, Issie Lapowsky noted that:

If the stigma surrounding Google Glass (or, perhaps more specifically, “Glassholes”) has taught us anything, it’s that no matter how revolutionary technology may be, ultimately its success or failure ride on public perception. Many promising technological developments have died because they were ahead of their times. During a cultural moment when the alleged arrogance of some tech companies is creating a serious image problem, the risk of pushing new tech on a public that isn’t ready could have real bottom-line consequences.

In my new book, I spend some time thinking about this process of “norm-shaping” through social pressure, activist efforts, educational steps, and even public shaming. A recent Ars Technica essay by Joe Silver offered some powerful examples of how, when “shamed on Twitter, corporations do an about-face.” Silver notes that “A few recent case-study examples of individuals who felt they were wronged by corporations and then took to the Twitterverse to air their grievances show how a properly placed tweet can be a powerful weapon for consumers to combat corporate malfeasance.” In my book and in recent law review articles, I have provided other examples of how this works at both a corporate and individual level to constrain improper behavior and protect various social norms.

Edmund Burke once noted that “Manners are of more importance than laws. Manners are what vex or soothe, corrupt or purify, exalt or debase, barbarize or refine us, by a constant, steady, uniform, insensible operation, like that of the air we breathe in.” Cristina Bicchieri, a leading behavioral ethicist, calls social norms “the grammar of society” because,

like a collection of linguistic rules that are implicit in a language and define it, social norms are implicit in the operations of a society and make it what it is. Like a grammar, a system of norms specifies what is acceptable and what is not in a social group. And analogously to a grammar, a system of norms is not the product of human design and planning.

Put simply, law is not the only thing that can regulate behavior — whether it is organizational behavior or individual behavior. It’s yet another way we learn to cope and “muddle through” over time. Again, check out my book for several other examples.

A Case Study: The Long-Standing “Problem” of Photography

Let’s bring all this together and be more concrete about it by using a case study: photography. For all the talk of how unsettling various modern technological developments are, they pale in comparison to just how jarring the advent of widespread public photography must have been in the late 1800s and beyond. “For the first time photographs of people could be taken without their permission—perhaps even without their knowledge,” notes Lawrence M. Friedman in his 2007 book, Guarding Life’s Dark Secrets: Legal and Social Controls over Reputation, Propriety, and Privacy.

Thus, the camera was viewed as a highly disruptive force as photography became more widespread. In fact, the most important essay ever written on privacy law, Samuel D. Warren and Louis D. Brandeis’s famous 1890 Harvard Law Review essay on “The Right to Privacy,” decried the spread of public photography. The authors lamented that “instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life” and claimed that “numerous mechanical devices threaten to make good the prediction that ‘what is whispered in the closet shall be proclaimed from the house-tops.’”

Warren and Brandeis weren’t alone. Plenty of other critics existed and many average citizens were probably outraged by the rise of cameras and public photography. Yet, personal norms and cultural attitudes toward cameras and public photography evolved quite rapidly and they became ingrained in human experience. At the same time, social norms and etiquette evolved to address those who would use cameras in inappropriate, privacy-invasive ways.

Again, we muddled through. And we’ve had to continuously muddle through in this regard because photography presents us with a seemingly endless set of new challenges. As cameras grow still smaller and get integrated into other technologies (most recently, smartphones, wearable technologies, and private drones), we’ve had to learn to adjust and accommodate. With wearable technologies (check out Narrative, Butterflye, and Autographer, for example), personal drones (see “Drones are the future of selfies”), and other forms of microphotography all coming online now, we’ll have to adjust still more and develop new norms and coping mechanisms. There’s never going to be an end to this adjustment process.

Toward Pragmatic Optimism

Should we really remain bullish about humanity’s prospects in the midst of all this turbulent change? I think so.

Again, long before the information revolution took hold, the industrial revolution produced its share of cultural and economic backlashes, and it is still doing so today. Most notably, many Malthusian skeptics and environmental critics lamented the supposed strain of population growth and industrialization on social and economic life. Catastrophic predictions followed.

In his 2007 book, Prophecies of Doom and Scenarios of Progress, Paul Dragos Aligica, a colleague of mine at the Mercatus Center, documented many of these industrial era “prophecies of doom” and described how this “doomsday ideology” was powerfully critiqued by a handful of scholars — most notably Herman Kahn and Julian Simon. Aligica explains that Kahn and Simon argued for “the alternative paradigm, the pro-growth intellectual tradition that rejected the prophecies of doom and called for realism and pragmatism in dealing with the challenge of the future.”

Kahn and Simon were pragmatic optimists, or what author Matt Ridley calls “rational optimists.” They were bullish about the future and the prospects for humanity, but they were not naive regarding the many economic and social challenges associated with technological change. Like Kahn and Simon, we should embrace the amazing technological changes at work in today’s information age, but with a healthy dose of humility and appreciation for the disruptive impact and pace of that change.

But the rational optimists never get as much attention as the critics and catastrophists. “For 200 years pessimists have had all the headlines even though optimists have far more often been right,” observes Ridley. “Arch-pessimists are feted, showered with honors and rarely challenged, let alone confronted with their past mistakes.” At least part of the reason for that, as already noted, goes back to the amazing rhetorical power of good intentions. Techno-pessimists often exhibit a deep passion about their particular cause and are typically given more than just the benefit of the doubt in debates about progress and the future; they are treated as superior to opponents who challenge their perspectives or proposals. When a privacy advocate says they are just looking out for consumers, or an online safety advocate claims to have the best interests of children in mind, or a consumer advocate argues that regulation is needed to protect certain people from some amorphous harm, they are assuming the moral high ground through the assertion of noble-minded intentions. Even if their proposals often fail to bring about the better state of affairs they promise, or even derail life-enriching innovations, they are more easily forgiven for those mistakes precisely because of their fervent claim of noble-minded intentions.

If intentions are allowed to trump empiricism and a general openness to change, however, the results for a free society and for human progress will be profoundly deleterious. That is why, when confronted with pessimistic, fear-based arguments, the pragmatic optimist must begin by granting that the critics clearly have the best of intentions, but then point out how intentions can only get us so far in the real world, which is full of complex trade-offs.

The pragmatic optimist must next meticulously and dispassionately outline the many reasons why restricting progress or allowing planning to enter the picture will have many unintended consequences and hidden costs. The trade-offs must be explained in clear terms. Examples of previous interventions that went wrong must be proffered.

The Evidence Speaks for Itself

Luckily, we pragmatic optimists have plenty of evidence working in our favor when making this case. As Pulitzer Prize-winning historian Richard Rhodes noted in his 1999 book, Visions of Technology: A Century of Vital Debate About Machines, Systems, and the Human World:

it’s surprising that [many intellectuals] don’t value technology; by any fair assessment, it has reduced suffering and improved welfare across the past hundred years. Why doesn’t this net balance of benevolence inspire at least grudging enthusiasm for technology among intellectuals? (p. 23)

Great question, and one that we should never stop asking the techno-critics to answer. After all, as Joel Mokyr notes in his wonderful 1990 book, Lever of Riches: Technological Creativity and Economic Progress, “Without [technological creativity], we would all still live nasty and short lives of toil, drudgery, and discomfort.” (p. viii) “Technological progress, in that sense, is worthy of its name,” he says. “It has led to something that we may call an ‘achievement,’ namely the liberation of a substantial portion of humanity from the shackles of subsistence living.” (p. 288) Specifically,

The riches of the post-industrial society have meant longer and healthier lives, liberation from the pains of hunger, from the fears of infant mortality, from the unrelenting deprivation that was the lot of all but a very few in preindustrial society. The luxuries and extravagances of the very rich in medieval society pale compared to the diet, comforts, and entertainment available to the average person in Western economies today. (p. 303)

In his new book, Smaller Faster Lighter Denser Cheaper: How Innovation Keeps Proving the Catastrophists Wrong, Robert Bryce hammers this point home when he observes that:

The pessimistic worldview ignores an undeniable truth: more people are living longer, healthier, freer, more peaceful lives than at any time in human history… the plain reality is that things are getting better, a lot better, for tens of millions of people around the world. Dozens of factors can be cited for the improving conditions of humankind. But the simplest explanation is that innovation is allowing us to do more with less.

This is the framework Herman Kahn, Julian Simon, and the other champions of progress used to deconstruct and refute the pessimists of previous eras. In line with that approach, we modern pragmatic optimists must continuously point to the unappreciated but unambiguous benefits of technological innovation and dynamic change. But we should also continue to remind the skeptics of the amazing adaptability of the human species in the face of adversity. As Kahn taught us long ago, when it comes to technological progress and humanity’s ingenious responses to it, “we should expect to go on being surprised” — and in mostly positive ways. Humans have consistently responded to technological change in creative, and sometimes completely unexpected, ways. There’s no reason to think we can’t get through modern technological disruptions using similar coping and adaptation strategies. As Mokyr noted in his recent City Journal essay on “The Next Age of Invention”:

Much like medication, technological progress almost always has side effects, but bad side effects are rarely a good reason not to take medication and a very good reason to invest in the search for second-generation drugs. To a large extent, technical innovation is a form of adaptation—not only to externally changing circumstances but also to previous adaptations.

In sum, we need to have a little faith in the ability of humanity to adjust to an uncertain future, no matter what it throws at us. We’ll muddle through and come out better because of what we have learned in the process, just as we have so many times before.

I’ll give venture capitalist Marc Andreessen the last word on this, since he’s been on an absolute tear on Twitter lately discussing many of the issues I’ve raised in this essay. Addressing the particular fear that automation is running amok and that robots will eat all our jobs, Andreessen eloquently noted:

We have no idea what the fields, industries, businesses, and jobs of the future will be. We just know we will create an enormous number of them. Because if robots and AI replace people for many of the things we do today, the new fields we create will be built on the huge number of people those robots and AI systems made available. To argue that huge numbers of people will be available but we will find nothing for them (us) to do is to dramatically short human creativity. And I am way long human creativity.

Me too, buddy. Me too.

New Law Review Article: “Privacy Law’s Precautionary Principle Problem”
http://techliberation.com/2014/06/16/new-law-review-article-privacy-laws-precautionary-principle-problem/
Mon, 16 Jun 2014

My latest law review article is entitled, “Privacy Law’s Precautionary Principle Problem,” and it appears in Vol. 66, No. 2 of the Maine Law Review. You can download the article on my Mercatus Center page, on the Maine Law Review website, or via SSRN. Here’s the abstract for the article:

Privacy law today faces two interrelated problems. The first is an information control problem. Like so many other fields of modern cyberlaw—intellectual property, online safety, cybersecurity, etc.—privacy law is being challenged by intractable Information Age realities. Specifically, it is easier than ever before for information to circulate freely and harder than ever to bottle it up once it is released.

This has not slowed efforts to fashion new rules aimed at bottling up those information flows. If anything, the pace of privacy-related regulatory proposals has been steadily increasing in recent years even as these information control challenges multiply.

This has led to privacy law’s second major problem: the precautionary principle problem. The precautionary principle generally holds that new innovations should be curbed or even forbidden until they are proven safe. Fashioning privacy rules based on precautionary principle reasoning necessitates prophylactic regulation that makes new forms of digital innovation guilty until proven innocent.

This puts privacy law on a collision course with the general freedom to innovate that has thus far powered the Internet revolution, and privacy law threatens to limit innovations consumers have come to expect or even raise prices for services consumers currently receive free of charge. As a result, even if new regulations are pursued or imposed, there will likely be formidable push-back not just from affected industries but also from their consumers.

In light of both these information control and precautionary principle problems, new approaches to privacy protection are necessary. We need to invert the process of how we go about protecting privacy by focusing more on practical “bottom-up” solutions—education, empowerment, public and media pressure, social norms and etiquette, industry self-regulation and best practices, and an enhanced role for privacy professionals within organizations—instead of “top-down” legalistic solutions and regulatory techno-fixes. Resources expended on top-down regulatory pursuits should instead be put into bottom-up efforts to help citizens better prepare for an uncertain future.

In this regard, policymakers can draw important lessons from the debate over how best to protect children from objectionable online content. In a sense, there is nothing new under the sun; the current debate over privacy protection has many parallels with earlier debates about how best to protect online child safety. Most notably, just as top-down regulatory constraints came to be viewed as constitutionally suspect, economically inefficient, and highly unlikely to even be workable in the long run for protecting online child safety, the same will likely be true for most privacy-related regulatory enactments.

This article sketches out some general lessons from those online safety debates and discusses their implications for privacy policy going forward.

Read the full article here [PDF].

Related Material:

 

Adam Thierer, “Privacy Law’s Precautionary Principle Problem” (Maine Law Review, 2014)

video: Cap Hill Briefing on Emerging Tech Policy Issues http://techliberation.com/2014/06/12/video-cap-hill-briefing-on-emerging-tech-policy-issues/ http://techliberation.com/2014/06/12/video-cap-hill-briefing-on-emerging-tech-policy-issues/#comments Thu, 12 Jun 2014 15:53:33 +0000 http://techliberation.com/?p=74611

I recently did a presentation for Capitol Hill staffers about emerging technology policy issues (driverless cars, the “Internet of Things,” wearable tech, private drones, “biohacking,” etc.) and the various policy issues they would give rise to (privacy, safety, security, economic disruptions, etc.). The talk is derived from my new little book on “Permissionless Innovation,” but in the coming months I will be releasing big papers on each of the topics discussed here.

Additional Reading:

Has Copyright Gone Too Far? Watch This “Hangout” to Find Out http://techliberation.com/2014/06/09/has-copyright-gone-too-far-watch-this-hangout-to-find-out/ http://techliberation.com/2014/06/09/has-copyright-gone-too-far-watch-this-hangout-to-find-out/#comments Tue, 10 Jun 2014 01:20:19 +0000 http://techliberation.com/?p=74599

Last week, the Mercatus Center and the R Street Institute co-hosted a video discussion about copyright law. I participated in the Google Hangout, along with co-liberator Tom Bell of Chapman Law School (and author of the new book Intellectual Privilege), Mitch Stoltz of the Electronic Frontier Foundation, Derek Khanna, and Zach Graves of the R Street Institute. We discussed the Aereo litigation, compulsory licensing, statutory damages, the constitutional origins of copyright, and many more hot copyright topics.

You can watch the discussion here:

 

Outdated Policy Decisions Don’t Dictate Future Rights in Perpetuity http://techliberation.com/2014/06/09/outdated-policy-decisions-dont-dictate-future-rights-in-perpetuity/ http://techliberation.com/2014/06/09/outdated-policy-decisions-dont-dictate-future-rights-in-perpetuity/#comments Mon, 09 Jun 2014 13:19:04 +0000 http://techliberation.com/?p=74596

Congressional debates about STELA reauthorization have resurrected the notion that TV stations “must provide a free service” because they “are using public spectrum.” This notion, which is rooted in 1930s government policy, has long been used to justify the imposition of unique “public interest” regulations on TV stations. But outdated policy decisions don’t dictate future rights in perpetuity, and policymakers abandoned the “public spectrum” rationale long ago.

All wireless services use the public spectrum, yet none of them are required to provide a free commercial service except broadcasters. Satellite television operators, mobile service providers, wireless Internet service providers, and countless other commercial spectrum users are free to charge subscription fees for their services.

There is nothing intrinsic in the particular frequencies used by broadcasters that justifies their discriminatory treatment. Mobile services use spectrum once allocated to broadcast television, but aren’t treated like broadcasters.

The fact that broadcast licenses were once issued without holding an auction is similarly irrelevant. All spectrum licenses were granted for free before the mid-1990s. For example, cable and satellite television operators received spectrum licenses for free, but are not required to offer their video services for free.

If the idea is to prevent companies that were granted free licenses from receiving a “windfall,” it’s too late. As Jeffrey A. Eisenach has demonstrated, “the vast majority of current television broadcast licensees [92%] have paid for their licenses through station transactions.”

The irrelevance of the free spectrum argument is particularly obvious when considering the differential treatment of broadcast and satellite spectrum. Spectrum licenses for broadcast TV stations are now subject to competitive bidding at auction while satellite television licenses are not. If either service should be required to provide a free service on the basis of spectrum policy, it should be satellite television.

Although TV stations were loaned an extra channel during the DTV transition, the DTV transition is over. Those channels have been returned and were auctioned for approximately $19 billion in 2008. There is no reason to hold TV stations accountable in perpetuity for a temporary loan.

Even if there were, the loan was not free. Though TV stations did not pay lease fees for the use of those channels, they nevertheless paid a heavy price. TV stations were required to invest substantial sums in HDTV technology and to broadcast signals in that format long before it was profitable. The FCC required “rapid construction of digital facilities by network-affiliated stations in the top markets, in order to expose a significant number of households, as early as possible, to the benefits of DTV.” TV stations were thus forced to “bear the risks of introducing digital television” for the benefit of consumers, television manufacturers, MVPDs, and other digital media.

The FCC did not impose comparable “loss leader” requirements on MVPDs. They are free to wait until consumer demand for digital and HDTV content justifies upgrading their systems — and they are still lagging TV stations by a significant margin. According to the FCC, only about half of the collective footprints of the top eight cable MVPDs had been transitioned to all-digital channels at the end of 2012. By comparison, the DTV transition was completed in 2009.

There simply is no satisfactory rationale for requiring broadcasters to provide a free service based on their use of spectrum or the details of past spectrum licensing decisions. If the applicability of a free service requirement turned on such issues, cable and satellite television subscribers wouldn’t be paying subscription fees.

Son’s Criticism of U.S. Broadband Misleading and Misplaced http://techliberation.com/2014/06/02/sons-criticism-of-u-s-broadband-misleading-and-misplaced/ http://techliberation.com/2014/06/02/sons-criticism-of-u-s-broadband-misleading-and-misplaced/#comments Mon, 02 Jun 2014 23:43:19 +0000 http://techliberation.com/?p=74590

Chairman and CEO Masayoshi Son of SoftBank again criticized U.S. broadband (see this and this) at last week’s Code Conference.

According to Son, the U.S. created the Internet, but its speeds rank 15th out of 16 major countries, ahead of only the Philippines. Mexico is No. 17, by the way.

It turns out that Son couldn’t have been referring to the broadband service he receives from Comcast, since the survey data he was citing—as he has in the past—appears to be from OpenSignal and was gleaned from a subset of the six million users of the OpenSignal app who had 4G LTE wireless access in the second half of 2013.

Oh, and Son neglected to mention that immediately ahead of the U.S. in the OpenSignal survey is Japan.

Son, who is also the chairman of Sprint, has a legitimate grievance with overzealous U.S. antitrust enforcers.  But he should be aware that for many years the proponents of network neutrality regulation have cited international rankings in support of their contention that the U.S. broadband market is under-regulated.

It is a well-established fact that measuring broadband speeds and prices from one country to the next is difficult as a result of “significant gaps and variations in data collection methodologies,” and that “numerous market, regulatory, and geographic factors determine penetration rates, prices, and speeds.”  See, e.g., the  Federal Communications Commission’s most recent International Broadband Data Report.  In the case of wireless services, as one example, the availability of sufficient airwaves can have a huge impact on speeds and prices.  Airwaves are assigned by the FCC.

There are some bright spots in the broadband comparisons published by a number of organizations.

For example, U.S. consumers pay the third-lowest average price for entry-level fixed broadband among the 161 countries surveyed by the ITU (International Telecommunication Union).

And as David Balto notes over at Huffington Post, Akamai reports that the average connection speeds in Japan and the U.S. aren’t very far apart—12.8 megabits per second in Japan versus 10 Mbps in the U.S.

Actual speeds experienced by broadband users reflect the service tiers consumers choose to purchase, and not everyone elects to pay for the highest available speed. It’s unfair to blame service providers for that.

A more relevant metric for judging service providers is investment.  ITU reports that the U.S. leads every other nation in telecommunications investment by far.  U.S. service providers invested more than $70 billion in 2010 versus less than $17 billion in Japan.  On a per capita basis, telecom investment in the U.S. is almost twice that of Japan.
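A quick back-of-the-envelope check of that per-capita claim is sketched below; the investment totals are the ones cited above, while the rough 2010 population figures are assumptions added here purely for illustration.

```python
# Rough sanity check of the per-capita investment comparison above.
# Investment totals are those cited in the post; the ~2010 population
# figures are approximate assumptions added for illustration only.
us_investment, japan_investment = 70e9, 17e9        # dollars (2010)
us_population, japan_population = 309e6, 128e6      # people (approx.)

us_per_capita = us_investment / us_population            # ~ $227
japan_per_capita = japan_investment / japan_population   # ~ $133

print(f"US ${us_per_capita:.0f} vs. Japan ${japan_per_capita:.0f} "
      f"per person ({us_per_capita / japan_per_capita:.1f}x)")  # ~1.7x
```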

In Europe, per capita investment in telecommunications infrastructure is less than half what it is in the U.S., according to Martin Thelle and Bruno Basalisco.

Incidentally, the European Commission has concluded,

Networks are too slow, unreliable and insecure for most Europeans; Telecoms companies often have huge debts, making it hard to invest in improvements. We need to turn the sector around so that it enables more productivity, jobs and growth.

It should be noted that for the past decade or so Europe has been pursuing the same regulatory strategy that net neutrality boosters are advocating for the U.S.  Thelle and Basalisco observe that,

The problem with the European unbundling regulation is that it pitted short-term consumer benefits, such as low prices, against the long-run benefits from capital investment and innovation. Unfortunately, regulators often sacrificed the long-term interest by forcing an infrastructure owner to share its physical wires with competing operators at a cheap rate. Thus, the regulated company never had a strong incentive to invest in new infrastructure technologies — a move that would considerably benefit the competing operators using its infrastructure.

Europe’s experience with the unintended consequences of unnecessary regulation is perhaps the most useful lesson the U.S. can learn from abroad.

Mark T. Williams predicted Bitcoin’s price would be under $10 by now; it’s over $600 http://techliberation.com/2014/05/30/mark-t-williams-predicted-bitcoins-price-would-be-under-10-by-now-its-over-600/ http://techliberation.com/2014/05/30/mark-t-williams-predicted-bitcoins-price-would-be-under-10-by-now-its-over-600/#comments Fri, 30 May 2014 14:43:41 +0000 http://techliberation.com/?p=74581

In April I had the opportunity to testify before the House Small Business Committee on the costs and benefits of small business use of Bitcoin. It was a lively hearing, especially thanks to fellow witness Mark T. Williams, a professor of finance at Boston University. To say he was skeptical of Bitcoin would be an understatement.

Whenever people make the case that Bitcoin will inevitably collapse, I ask them to define collapse and name a date by which it will happen. I sometimes even offer to make a bet. As Alex Tabarrok has explained, bets are a tax on bullshit.

So one thing I really appreciate about Prof. Williams is that unlike any other critic, he has been willing to make a clear prediction about how soon he thought Bitcoin would implode. On December 10, he told Tim Lee in an interview that he expected Bitcoin’s price to fall to under $10 in the first half of 2014. A week later, on December 17, he clearly reiterated his prediction in an op-ed for Business Insider:

I predict that Bitcoin will trade for under $10 a share by the first half of 2014, single digit pricing reflecting its option value as a pure commodity play.

Well, you know where this is going. We’re now five months into the year. How is Bitcoin doing?

[Chart: CoinDesk Bitcoin Price Index]

It’s in the middle of a rally, with the price crossing $600 for the first time in a couple of months. Yesterday Dish Network announced it would begin accepting Bitcoin payments from customers, making it the largest company yet to do so.

None of this is to say that Bitcoin’s future is assured. It is a new and still experimental technology. But I think we can put to bed the idea that it will implode in the short term because it’s not like any currency or exchange system that came before, which was essentially Williams’s argument.

The Problem with “Pessimism Porn” http://techliberation.com/2014/05/23/the-problem-with-pessimism-porn/ http://techliberation.com/2014/05/23/the-problem-with-pessimism-porn/#comments Fri, 23 May 2014 19:54:52 +0000 http://techliberation.com/?p=74568

I’ve spent a lot of time here through the years trying to identify the factors that fuel moral panics and “technopanics.” (Here’s a compendium of the dozens of essays I’ve written here on this topic.) I brought all this thinking together in a big law review article (“Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle”) and then also in my new booklet, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.”

One factor I identify as contributing to panics is the fact that “bad news sells.” As I noted in the book, “Many media outlets and sensationalist authors sometimes use fear-based tactics to gain influence or sell books. Fear mongering and prophecies of doom are always effective media tactics; alarmism helps break through all the noise and get heard.”

In line with that, I highly recommend you check out this excellent new op-ed by John Stossel of Fox Business Network on “Good News vs. ‘Pessimism Porn’.” Stossel correctly notes that “the media win by selling pessimism porn.” He says:

Are you worried about the future? It’s hard not to be. If you watch the news, you mostly see violence, disasters, danger. Some in my business call it “fear porn” or “pessimism porn.” People like the stuff; it makes them feel alive and informed.

Of course, it’s our job to tell you about problems. If a plane crashes — or disappears — that’s news. The fact that millions of planes arrive safely is a miracle, but it’s not news. So we soak in disasters — and warnings about the next one: bird flu, global warming, potential terrorism. I won Emmys hyping risks but stopped winning them when I wised up and started reporting on the overhyping of risks. My colleagues didn’t like that as much.

He goes on to note that, even though all the data clearly show that humanity’s lot is improving, the press relentlessly pushes “pessimism porn.” He argues that “time and again, humanity survived doomsday. Not just survived, we flourish.” But that doesn’t stop the doomsayers from predicting that the sky is always about to fall. In particular, the press knows it can easily gin up more readers and viewers by amping up the fear-mongering and featuring loonies who will be all too happy to play the roles of pessimism porn stars. Of course, plenty of academics, activists, non-profit organizations, and even companies are all too eager to contribute to this gloom-and-doom game since they benefit from the exposure or money it generates.

The problem with all this, of course, is that it perpetuates societal fears and distrust. It also sometimes leads to misguided policies based on hypothetical worst-case thinking. As I argue in my new book, which Stossel was kind enough to cite in his essay, if we spend all our time living in constant fear of worst-case scenarios—and premising public policy upon them—it means that best-case scenarios will never come about.

Facts, not fear, should guide our thinking about the future.

______________________

Related Reading:

The Anticompetitive Effects of Broadcast Television Regulations http://techliberation.com/2014/05/22/the-anticompetitive-effects-of-broadcast-television-regulations/ http://techliberation.com/2014/05/22/the-anticompetitive-effects-of-broadcast-television-regulations/#comments Thu, 22 May 2014 15:44:29 +0000 http://techliberation.com/?p=74565

Shortly after Tom Wheeler assumed the Chairmanship at the Federal Communications Commission (FCC), he summed up his regulatory philosophy as “competition, competition, competition.” Promoting competition has been the norm in communications policy since Congress adopted the Telecommunications Act of 1996 in order to “promote competition and reduce regulation.” The 1996 Act has largely succeeded in achieving competition in communications markets with one glaring exception: broadcast television. In stark contrast to the pro-competitive approach that is applied in other market segments, Congress and the FCC have consistently supported policies that artificially limit the ability of TV stations to compete or innovate in the communications marketplace.

Radio broadcasting was not subject to regulatory oversight initially. In the unregulated era, the business model for over-the-air broadcasting was “still very much an open question.” Various methods for financing radio stations were proposed or attempted, including taxes on the sale of devices, private endowments, municipal or state financing, public donations, and subscriptions. “We are today so accustomed to the dominant role of the advertiser in broadcasting that we tend to forget that, initially, the idea of advertising on the air was not even contemplated and met with widespread indignation when it was first tried.”

Section 303 of the Communications Act of 1934 thus provided the FCC with broad authority to authorize over-the-air subscription television service (STV). When the D.C. Circuit Court of Appeals addressed this provision, it held that “subscription television is entirely consistent with [the] goals” of the Act. Analog STV services did not become widespread in the marketplace, however, due in part to regulatory limitations imposed on such services by the FCC. As a result, advertising dominated television revenue in the analog era.

The digital television (DTV) transition offered a new opportunity for TV stations to provide STV services in competition with MVPDs. The FCC had initially hoped that “multicasting” and other new capabilities provided by digital technologies would “help ensure robust competition in the video market that will bring more choices at less cost to American consumers.”

Despite the agency’s initial optimism, regulatory restrictions once again crushed the potential for TV stations to compete in other segments of the communications marketplace. When broadcasters proposed offering digital STV services with multiple broadcast and cable channels in order to compete with MVPDs, Congress held a hearing to condemn the innovation. Chairmen from both House and Senate committees threatened retribution against broadcasters if they pursued subscription television services — “There will be a quid pro quo.” Broadcasters responded to these Congressional threats by abandoning their plans to compete with MVPDs.

It’s hard to miss the irony in the 1996 Act’s approach to the DTV transition. Though the Act’s stated purposes are to “promote competition and reduce regulation,” it imposed additional regulatory requirements on television stations that have stymied their ability to innovate and compete. The 1996 Act’s broadcasting provision requires that the FCC impose limits on subscription television services “so as to avoid derogation of any advanced television services, including high definition television broadcasts, that the Commission may require using such frequencies,” and prohibits TV stations from being deemed an MVPD. The FCC’s rules require TV stations to “transmit at least one over-the-air video programming signal at no direct charge to viewers” because “free, over-the-air television is a public good, like a public park, and might not exist otherwise.”

These and other draconian legislative and regulatory limitations have forced TV stations to follow the analog television business model into the 21st Century while the rest of the communications industry innovated at a furious pace. As a result of this government-mandated broadcast business model, TV stations must rely on advertising and retransmission consent revenue for their survival.

Though the “public interest” status of TV stations may once have been considered a government benefit, it is rapidly becoming a curse. Congress and the FCC have both relied on the broadcast public interest shibboleth to impose unique and highly burdensome regulatory obligations on TV stations that are inapplicable to their competitors in the advertising and other potential markets. This disparity in regulatory treatment has increased dramatically under the current administration — to the point that it is threatening the viability of broadcast television.

Here are just three examples of the ways in which the current administration has widened the regulatory chasm between TV stations and their rivals:

  • In 2012, the FCC required only TV stations to post “political file” documents online, including the rates charged by TV stations for political advertising; MVPDs are not required to post this information online. This regulatory disparity gives political ad buyers an incentive to advertise on cable rather than broadcast channels and forces TV stations to disclose sensitive pricing information more widely than their competitors.
  • This year the FCC prohibited joint sales agreements for television stations only; MVPDs and online content distributors are not subject to any such limitations on their advertising sales. This prohibition gives MVPDs and online advertising platforms a substantial competitive advantage in the market for advertising sales.
  • This year the FCC also prohibited bundled programming sales by broadcasters only; cable networks are not subject to any limitations on the sale of programming in bundles. This disparity gives broadcast networks an incentive to avoid limitations on their programming sales by selling exclusively to MVPDs (i.e., becoming cable networks).

The FCC has not made any attempt to justify the differential treatment — because there is no rational justification for arbitrary and capricious decision-making.

Sadly, the STELA process in the Senate is threatening to make things worse. Some legislative proposals would eliminate retransmission consent and other provisions that provide the regulatory ballast for broadcast television’s government-mandated business model without eliminating the mandate. This approach would put a quick end to the administration’s “death by a thousand cuts” strategy with one killing blow. The administration must be laughing itself silly. When TV channels in smaller and rural markets go dark, this administration will be gone — and it will be up to Congress to explain the final TV transition.

Network Non-Duplication and Syndicated Exclusivity Rules Are Fundamental to Local Television http://techliberation.com/2014/05/19/network-non-duplication-and-syndicated-exclusivity-rules-are-fundamental-to-local-television/ http://techliberation.com/2014/05/19/network-non-duplication-and-syndicated-exclusivity-rules-are-fundamental-to-local-television/#comments Mon, 19 May 2014 19:13:22 +0000 http://techliberation.com/?p=74561

The Federal Communications Commission (FCC) recently sought additional comment on whether it should eliminate its network non-duplication and syndicated exclusivity rules (known as the “broadcasting exclusivity” rules). It should just as well have asked whether it should eliminate its rules governing broadcast television. Local TV stations could not survive without broadcast exclusivity rights that are enforceable both legally and practicably.

The FCC’s broadcast exclusivity rules “do not create rights but rather provide a means for the parties to exclusive contracts to enforce them through the Commission rather than the courts.” (Broadcast Exclusivity Order, FCC 88-180 at ¶ 120 (1988)) The rights themselves are created through private contracts between TV stations and video programming vendors in the same manner that MVPDs create exclusive rights to distribute cable network programming.

Local TV stations typically negotiate contracts for the exclusive distribution of national broadcast network or syndicated programming in their respective local markets in order to preserve their ability to obtain local advertising revenue. The FCC has long recognized that, “When the same program a [local] broadcaster is showing is available via cable transmission of a duplicative [distant] signal, the [local] broadcaster will attract a smaller audience, reducing the amount of advertising revenue it can garner.” (Program Access Order, FCC 12-123 at ¶ 62 (2012)) Enforceable broadcast exclusivity agreements are thus necessary for local TV stations to generate the advertising revenue that is necessary for them to survive the government’s mandatory broadcast television business model.

The FCC determined nearly fifty years ago that it is an anticompetitive practice for multichannel video programming distributors (MVPDs) to import distant broadcast signals into local markets that duplicate network and syndicated programming to which local stations have purchased exclusive rights. (See First Exclusivity Order, 38 FCC 683, 703-704 (1965)) Though the video marketplace has changed since 1965, the government’s mandatory broadcast business model is still required by law, and MVPD violations of broadcast exclusivity rights are still anticompetitive.

The FCC adopted broadcast exclusivity procedures to ensure that broadcasters, who are legally prohibited from obtaining direct contractual relationships with viewers or economies of scale, could enjoy the same ability to enforce exclusive programming rights as larger MVPDs. The FCC’s rules are thus designed to “allow all participants in the marketplace to determine, based on their own best business judgment, what degree of programming exclusivity will best allow them to compete in the marketplace and most effectively serve their viewers.” (Broadcast Exclusivity Order at ¶ 125.)

When it adopted the current broadcast exclusivity rules, the FCC concluded that enforcement of broadcast exclusivity agreements was necessary to counteract regulatory restrictions that prevent TV stations from competing directly with MVPDs. Broadcasters suffer the diversion of viewers to duplicative programming on MVPD systems when local TV stations choose to exhibit the most popular programming, because that programming is the most likely to be duplicated. (See Broadcast Exclusivity Order at ¶ 62.) Normally firms suffer their most severe losses when they fail to meet consumer demand, but, in the absence of enforceable broadcast exclusivity agreements, this relationship is reversed for local TV stations: they suffer their most severe losses precisely when they offer the programming that consumers desire most.

The fact that only broadcasters suffer this kind of [viewership] diversion is stark evidence, not of inferior ability to be responsive to viewers’ preferences, but rather of the fact that broadcasters operate under a different set of competitive rules. All programmers face competition from alternative sources of programming. Only broadcasters face, and are powerless to prevent, competition from the programming they themselves offer to viewers. (Id. at ¶ 42.)

The FCC has thus concluded that, if TV stations were unable to enforce exclusive contracts through FCC rules, TV stations would be competitively handicapped compared to MVPDs. (See id. at ¶ 162.)

Regulatory restrictions effectively prevent local TV stations from enforcing broadcast exclusivity agreements through preventative measures and in the courts: (1) prohibitions on subscription television and the use of digital rights management (DRM) prevent broadcasters from protecting their programming from unauthorized retransmission, and (2) stringent ownership limits prevent them from obtaining economies of scale.

Preventative measures may be the most cost effective way to protect digital content rights. Most digital content is distributed with some form of DRM because, as Benjamin Franklin famously said, “an ounce of prevention is worth a pound of cure.” MVPDs, online video distributors, and innumerable Internet companies all use DRM to protect their digital content and services — e.g., cable operators use the CableCard standard to limit distribution of cable programming to their subscribers only.

TV stations are the only video distributors that are legally prohibited from using DRM to control retransmission of their primary programming. The FCC adopted a form of DRM for digital television in 2003 known as the “broadcast flag,” but the D.C. Circuit Court of Appeals struck it down.

The requirement that TV stations offer their programming “at no direct charge to viewers” effectively prevents them from having direct relationships with end users. TV stations cannot require those who receive their programming over-the-air to agree to any particular terms of service or retransmission limitations through private contract. As a result, TV stations have no way to avail themselves of the types of contractual protections enjoyed by MVPDs who offer services on a subscription basis.

The subscription television and DRM prohibitions have a significant adverse impact on the ability of TV stations to control the retransmission and use of their programming. The Aereo litigation provides a timely example. If TV stations offered their programming on a subscription basis using the CableCard standard, the Aereo “business” model would not exist and the courts would not be tying themselves into knots over potentially conflicting interpretations of the Copyright Act. Because they are legally prohibited from using DRM to prevent companies like Aereo from receiving and retransmitting their programming in the first instance, however, TV stations are forced to rely solely on after-the-fact enforcement to protect their programming rights — i.e., protracted and uncertain litigation in multiple jurisdictions.

Localism policies make after-the-fact enforcement particularly costly for local TV stations. The stringent ownership limits that prevent TV stations from obtaining economies of scale have the effect of subjecting TV stations to higher enforcement costs relative to other digital rights holders. In the absence of FCC rules enforcing broadcast exclusivity agreements, family-owned TV stations could be forced to defend their rights in court against significantly larger companies that have the incentive and ability to use litigation strategically.

In sum, the FCC’s non-duplication and syndication rules balance broadcast regulatory limitations by providing clear mechanisms for TV stations to communicate their contractual rights to MVPDs, with whom they have no direct relationship, and enforce those rights at the FCC (which is a strong deterrent to the potential for strategic litigation). There is nothing unfair or over-regulatory about FCC enforcement in these circumstances. So why is the FCC asking whether it should eliminate the rules?

IP Transition Luncheon Briefing on Monday, May 19 http://techliberation.com/2014/05/16/ip-transition-luncheon-briefing-on-monday-may-19/ http://techliberation.com/2014/05/16/ip-transition-luncheon-briefing-on-monday-may-19/#comments Fri, 16 May 2014 17:36:29 +0000 http://techliberation.com/?p=74558

Telephone companies have already begun transitioning their networks to Internet Protocol. This could save billions while improving service for consumers and promoting faster broadband, but has raised a host of policy and legal questions. How can we ensure the switch is as smooth and successful as possible? What legal authority do the FCC and other agencies have over the IP Transition and how should they use it?

Join TechFreedom on Monday, May 19, at its Capitol Hill office for a lunch event to discuss this and more with top experts from the field. Two short technical presentations will set the stage for a panel of legal and policy experts, including:

  • Jodie Griffin, Senior Staff Attorney, Public Knowledge
  • Hank Hultquist, VP of Federal Regulatory, AT&T
  • Berin Szoka, President, TechFreedom
  • Christopher Yoo, Professor, University of Pennsylvania School of Law
  • David Young, VP of Federal Regulatory Affairs, Verizon

The panel will be livestreamed (available here). Join the conversation on Twitter with the #IPTransition hashtag.

When:
Monday, May 19, 2014
11:30am – 12:00pm — Lunch and registration
12:00pm – 12:20pm — Technical presentations by AT&T and Verizon
12:20pm – 2:00 pm — Panel on legal and policy issues, audience Q&A

Where:
United Methodist Building, Rooms 1 & 2
100 Maryland Avenue NE
Washington, DC 20002

RSVP today!

Questions?
Email mail@techfreedom.org.

Why Reclassification Would Make the Internet Less Open http://techliberation.com/2014/05/15/why-reclassification-would-make-the-internet-less-open/ http://techliberation.com/2014/05/15/why-reclassification-would-make-the-internet-less-open/#comments Thu, 15 May 2014 14:58:19 +0000 http://techliberation.com/?p=74555

There seems to be increasing chatter among net neutrality activists lately on the subject of reclassifying ISPs as Title II services, subject to common carriage regulation. Although the intent in pushing reclassification is to make the Internet more open and free, in reality such a move could backfire badly. Activists don’t seem to have considered the effect of reclassification on international Internet politics, where it would likely give enemies of Internet openness everything they have always wanted.

At the WCIT in 2012, one of the major issues up for debate was whether the revised International Telecommunication Regulations (ITRs) would apply to Operating Agencies (OAs) or to Recognized Operating Agencies (ROAs). OA is a very broad term that covers private network operators, leased line networks, and even ham radio operators. Since “OA” would have included IP service providers, the US and other more liberal countries were very much opposed to the application of the ITRs to OAs. ROAs, on the other hand, are OAs that operate “public correspondence or broadcasting service.” That first term, “public correspondence,” is a term of art that means basically common carriage. The US government was OK with the use of ROA in the treaty because it would have essentially cabined the regulations to international telephone service, leaving the Internet free from UN interference. What actually happened was that there was a failed compromise in which ITU Member States created a new term, Authorized Operating Agency, that was arguably somewhere in the middle—the definition included the word “public” but not “public correspondence”—and the US and other countries refused to sign the treaty out of concern that it was still too broad.

If the US reclassified ISPs as Title II services, that would arguably make them ROAs for purposes at the ITU (arguably because it depends on how you read the definition of ROA and Article 6 of the ITU Constitution). This potentially opens ISPs up to regulation under the ITRs. This might not be so bad if the US were the only country in the world—after all, the US did not sign the 2012 ITRs, and it does not use the ITU’s accounting rate provisions to govern international telecom payments.

But what happens when other countries start copying the US, imposing common carriage requirements, and classifying their ISPs as ROAs? Then the story gets much worse. Countries that are signatories to the 2012 ITRs would have ITU mandates on security and spam imposed on their networks, which is to say that the UN would start essentially regulating content on the Internet. This is what Russia, Saudi Arabia, and China have always wanted. Furthermore (and perhaps more frighteningly), classification as ROAs would allow foreign ISPs to forgo commercial peering arrangements in favor of the ITU’s accounting rate system. This is what a number of African governments have always wanted. Ethiopia, for example, considered a bill (I’m not 100 percent sure it ever passed) that would send its own citizens to jail for 15 years for using VOIP, because this decreases Ethiopian international telecom revenues. Having the option of using the ITU accounting rate system would make it easier to extract revenues from international Internet use.

Whatever you think of, e.g., Comcast and Cogent’s peering dispute, applying ITU regulation to ISPs would be significantly worse in terms of keeping the Internet open. By reclassifying US ISPs as common carriers, we would open the door to exactly that. The US government has never objected to ITU regulation of ROAs, so if we ever create a norm under which ISPs are arguably ROAs, we would be essentially undoing all of the progress that we made at the WCIT in standing up for a distinction between old-school telecom and the Internet. I imagine that some net neutrality advocates will find this unfair—after all, their goal is openness, not ITU control over IP service. But this is the reality of international politics: the US would have a very hard time at the ITU arguing that regulating for neutrality and common carriage is OK, but regulating for security, content, and payment is not.

If the goal is to keep the Internet open, we must look somewhere besides Title II.

Adam Thierer on Permissionless Innovation http://techliberation.com/2014/05/13/thierer/ http://techliberation.com/2014/05/13/thierer/#comments Tue, 13 May 2014 10:00:30 +0000 http://techliberation.com/?p=74547

Adam Thierer, senior research fellow with the Technology Policy Program at the Mercatus Center at George Mason University, discusses his latest book Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. Thierer discusses which types of policies promote technological discoveries as well as those that stifle the freedom to innovate. He also takes a look at new technologies — such as driverless cars, drones, big data, smartphone apps, and Google Glass — and how the American public will adapt to them.

Download

Related Links

In Defense of Broadband Fast Lanes http://techliberation.com/2014/05/12/in-defense-of-broadband-fast-lanes/ http://techliberation.com/2014/05/12/in-defense-of-broadband-fast-lanes/#comments Mon, 12 May 2014 17:08:06 +0000 http://techliberation.com/?p=74530

The outrage over the FCC’s attempt to write new open Internet rules has caught many by surprise, and probably Chairman Wheeler as well. The rumored possibility of the FCC authorizing broadband “fast lanes” draws most complaints and animus. Gus Hurwitz points out that the FCC’s actions this week have nothing to do with fast lanes and Larry Downes reminds us that this week’s rules don’t authorize anything. There’s a tremendous amount of misinformation because few understand how administrative law works. Yet many net neutrality proponents fear the worst from the proposed rules because Wheeler takes the consensus position that broadband provision is a two-sided market and prioritized traffic could be pro-consumer.

Fast lanes have been permitted by the FCC for years and they can benefit consumers. Some broadband services–like video and voice over Internet protocol (VoIP)–need to be transmitted faster or with better quality than static webpages, email, and file syncs. Don’t take my word for it. The 2010 Open Internet NPRM, which led to the recently struck-down rules, stated,

As rapid innovation in Internet-related services continues, we recognize that there are and will continue to be Internet-Protocol-based offerings (including voice and subscription video services, and certain business services provided to enterprise customers), often provided over the same networks used for broadband Internet access service, that have not been classified by the Commission. We use the term “managed” or “specialized” services to describe these types of offerings. The existence of these services may provide consumer benefits, including greater competition among voice and subscription video providers, and may lead to increased deployment of broadband networks.

I have no special knowledge about what ISPs will or won’t do. I wouldn’t predict in the short term the widespread development of prioritized traffic under even minimal regulation. I think the carriers haven’t looked too closely at additional services because net neutrality regulations have precariously hung over them for a decade. But some of net neutrality proponents’ talking points (like insinuating or predicting ISPs will block political speech they disagree with) are not based in reality.

We run a serious risk of derailing research and development into broadband services if the FCC is cowed by uninformed and extreme net neutrality views. As Adam eloquently said, “Living in constant fear of hypothetical worst-case scenarios — and premising public policy upon them — means that best-case scenarios will never come about.” Many net neutrality proponents would like to smear all priority traffic as unjust and exploitative. This is unfortunate and a bit ironic because one of the most transformative communications developments, cable VoIP, is a prioritized IP service.

There are other IP services that are only economically feasible if jitter, latency, and slow speed are minimized. Prioritized traffic takes several forms, but it could enhance these services:

VoIP. This prioritized service has actually been around for several years and has completely revolutionized the phone industry. Something unthinkable for decades–facilities-based local telephone service–became commonplace in the last few years and undermined much of the careful industrial planning in the 1996 Telecom Act. If you subscribe to voice service from your cable provider, you are benefiting from fast lane treatment. Your “phone” service is carried over your broadband cable, segregated from your television and Internet streams. Smaller ISPs could conceivably make their phone service more attractive by pairing up with a Skype- or Vonage-type voice provider, and there are other possibilities that make local phone service more competitive.

Cloud-hosted virtual desktops. This is not a new idea, but it’s possible to have most or all of your computing done in a secure cloud, not on your PC, via a prioritized data stream. With a virtual desktop, your laptop or desktop PC functions mainly as a dumb portal. No more annoying software updates. Fewer security risks. IT and security departments everywhere would rejoice. Google Chromebooks are a stripped-down version of this but truly functional virtual desktops would be valued by corporations, reporters, or government agencies that don’t want sensitive data saved on a bunch of laptops in their organization that they can’t constantly monitor. Virtual desktops could also transform the device market, putting the focus on a great cloud and (priority) broadband service and less on the power and speed of the device. Unfortunately, at present, virtual desktops are not in widespread use because even small lag frustrates users.

TV. The future of TV is IP-based and the distinction between “TV” and “the Internet” is increasingly blurring, with Netflix leading the way. In a fast lane future, you could imagine ISPs launching pared-down TV bundles–say, Netflix, HBO Go, and some sports channels–over a broadband connection. Most ISPs wouldn’t do it, but an over-the-top package might interest smaller ISPs who find acquiring TV content and bundling their own cable packages time-consuming and expensive.

Gaming. Computer gamers hate jitter and latency. (My experience with a roommate who had unprintable outbursts when Diablo III or World of Warcraft lagged is not uncommon.) Game lag means you die quite frequently because of your data connection and this depresses your interest in a game. There might be gaming companies out there who would like to partner with ISPs and other network operators to ensure smooth gameplay. Priority gaming services could also lead the way to more realistic, beautiful, and graphics-intensive games.

Teleconferencing, telemedicine, teleteaching, etc. Any real-time, video-based service could reach a critical mass of subscribers and become economical with priority treatment. Any lag absolutely kills consumer interest in these video-based applications. By favoring applications like telemedicine, priority treatment could make remote services attractive to enough people for ISPs to offer them as stand-alone broadband products.
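For readers curious what “priority” looks like below the policy layer, here is a minimal sketch (my own illustration, not anything described above) of how an application can request expedited handling by setting a DiffServ code point on its packets. Whether any network along the path honors that marking is precisely the business and policy question at issue; the destination here is a reserved documentation address, not a real service.

```python
import socket

# DSCP "Expedited Forwarding" (EF, decimal 46) is the marking commonly used
# for latency-sensitive traffic such as interactive voice. The DSCP value
# occupies the upper six bits of the IP TOS/DiffServ byte, hence the shift.
DSCP_EF = 46
TOS_VALUE = DSCP_EF << 2  # 184

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)  # Linux/Unix

# Send a hypothetical voice frame to a TEST-NET documentation address.
# Routers are free to ignore or re-mark this field; honoring it end to end
# is a matter of network policy and interconnection agreements.
sock.sendto(b"rtp-voice-frame", ("192.0.2.10", 5004))
sock.close()
```

Managed services like cable VoIP achieve the same end through dedicated, provisioned capacity rather than per-packet markings, but the underlying point is the same: some traffic is handled differently from best-effort data.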

This is just a sampling of the possible consumer benefits of pay-for-priority IP services that we may sacrifice in the name of strict neutrality enforcement. There are other services we can’t even conceive of yet that may never develop. Generally, net neutrality proponents don’t acknowledge these possible benefits and are trying to poison the well against all priority deals, including many of these services.

Most troubling, net neutrality turns the regulatory process on its head. Rather than identify a market failure and then take steps to correct the failure, the FCC may prevent commercial agreements that would be unobjectionable in nearly any other industry. The FCC has many experts who are familiar with the possible benefits of broadband fast lanes, which is why the FCC has consistently blessed priority treatment in some circumstances.

Unfortunately, the orchestrated reaction in recent weeks might leave us with onerous rules, delaying or making impossible new broadband services. Hopefully, in the ensuing months, reason wins out and FCC staff are persuaded by competitive analysis and possible innovations, not t-shirt slogans.

Technology Policy: A Look Ahead http://techliberation.com/2014/05/12/technology-policy-a-look-ahead/ http://techliberation.com/2014/05/12/technology-policy-a-look-ahead/#comments Mon, 12 May 2014 14:20:22 +0000 http://techliberation.com/?p=74527

This article was written by Adam Thierer, Jerry Brito, and Eli Dourado.

For the three of us, like most others in the field today, covering “technology policy” in Washington has traditionally been synonymous with covering communications and information technology issues, even though “tech policy” has actually always included policy relevant to a much wider array of goods, services, professions, and industries.

That’s changing, however. Day by day, the world of “technology policy” is evolving and expanding to incorporate much, much more. The same forces that have powered the information age revolution are now transforming countless other fields and laying waste to older sectors, technologies, and business models in the process. As Marc Andreessen noted in a widely-read 2011 essay, “Why Software Is Eating The World”:

More and more major businesses and industries are being run on software and delivered as online services—from movies to agriculture to national defense. Many of the winners are Silicon Valley-style entrepreneurial technology companies that are invading and overturning established industry structures. Over the next 10 years, I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.

Why is this happening now? Six decades into the computer revolution, four decades since the invention of the microprocessor, and two decades into the rise of the modern Internet, all of the technology required to transform industries through software finally works and can be widely delivered at global scale.

More specifically, many of the underlying drivers of the digital revolution—massive increases in processing power, exploding storage capacity, steady miniaturization of computing, ubiquitous communications and networking capabilities, the digitization of all data, and increasing decentralization and disintermediation—are beginning to have a profound impact beyond the confines of cyberspace.

The pace of this disruptive change is only going to accelerate and come to touch more and more industries. As it does, the public policy battles will also evolve and expand, and so, too, will our understanding of what “tech policy” includes.

That’s why the Mercatus Center Technology Policy Program continues to expand its issue coverage to include more scholarship on a wide array of emerging technologies and sectors. What we’re finding is that everything old is new again. The very same policy debates over privacy, safety, security, and IP that have dominated information policy are expanding to new frontiers. For example, when we started our program five years ago, we never thought we would be filing public interest comments on privacy issues with the Federal Aviation Administration, but that’s where we found ourselves last year as we advocated for permissionless innovation in the emerging domestic drone space. We now expect that we will soon find ourselves making a similar case at the Food and Drug Administration, the Department of Transportation, and many other regulatory bodies in the near future.

In many ways, we’re still just talking about information policy, it’s just that increasing number of devices and sensors are being connected to the Internet. In other ways, however, it is in fact about more than simply an expanded conception of information policy. It’s about bringing the benefits of a permissionless innovation paradigm, which has worked so well in the Internet space, to sectors now controlled by prescriptive and precautionary regulation. As software continues to eat the world, innovation is increasingly butting up against outmoded and protectionist barriers to entry. Most will agree that the Internet has been the success it is because to launch a new product or service, you don’t have to ask anyone’s permission; we only need contract and tort law, and smart guard rails like safe harbors, to protect our rights. Yet if you want to offer a software-driven car service, you have to get permission from government first. If you want to offer genome sequencing and interpretation, you have to get permission first.

Maybe it’s time for a change. As Wall Street Journal columnist L. Gordon Crovitz argues, “The regulation the digital economy needs most now is for permissionless innovation to become the default law of the land, not the exception.” As a result, we’ll see this debate between “permissionless innovation” and the “precautionary principle” play out for a wide variety of new innovations such as the so-called “Internet of Things” and “wearable technologies,” but also with smart car technology, commercial drones, robotics, 3D printing, and many other new devices and services that are just now emerging. A recent New York Times survey, “A Vision of the Future From Those Likely to Invent It,” highlights additional innovation opportunities where this tension will exist.

The evolution of our work is also driven by the accelerating trend of decentralization and disintermediation, which could potentially make precautionary regulation too costly to undertake. When governments want to control information, they invariably head for the choke points, like payment processors or ISP-run DNS servers. This fact has created a demand for technologies that bypass such intermediaries. As mediated systems are increasingly replaced by decentralized peer-to-peer alternatives, there will be fewer points of control for governments to leverage. The result may be that enforcement will have to target end users directly, which would increase not only the direct costs of enforcement but also the political ones.

So in the coming months, you can expect to see Mercatus produce research on the economics of intellectual property, broadband investment, and spectrum policy, but also research on autonomous vehicles, wearable technologies, cryptocurrencies, and barriers to medical innovation. The future is looking brighter than ever, and we are excited to make permissionless innovation the default in that future.

Further Reading:

The Gravest Threat To The Internet http://techliberation.com/2014/05/11/the-gravest-threat-to-the-internet/ http://techliberation.com/2014/05/11/the-gravest-threat-to-the-internet/#comments Mon, 12 May 2014 03:35:57 +0000 http://techliberation.com/?p=74522

Allowing broadband providers to impose tolls on Internet companies represents a “grave” threat to the Internet, or so wrote several Internet giants and their allies in a letter to the Federal Communications Commission this past week.

The reality is that broadband networks are very expensive to build and maintain.  Broadband companies have invested approximately $250 billion in U.S. wired and wireless broadband networks—and have doubled average delivered broadband speeds—just since President Obama took office in early 2009.  Nevertheless, some critics claim that American broadband is still too slow and expensive.

The current broadband pricing model is designed to recover the entire cost of maintaining and improving the network from consumers.  Internet companies get free access to broadband subscribers.

Although the broadband companies are not poised to experiment with different pricing models at this time, the Internet giants and their allies are mobilizing against the hypothetical possibility that they might in the future.  But this is not the gravest threat to the Internet.  Broadband is a “multisided” market like newspapers.  Newspapers have two sets of customers—advertisers and readers—and both “pay to play.”  Advertisers pay different rates depending on how much space their ads take up and on where the ads appear in the newspaper.  And advertisers underwrite much of the cost of producing newspapers.

Or perhaps broadband providers might follow the longstanding practice of airlines that charge more than one price on the same flight.  In the early days of air travel, passengers only had a choice of first class.  The introduction of discounted coach fares made it affordable for many more people to fly, and generated revenue to pay for vastly expanded air service.

Broadband companies voluntarily invest approximately $65 billion per year because they fundamentally believe that more capacity and lower prices will expand their markets.  “Foreign” devices, content and applications are consistent with this vision because they stimulate demand for broadband.

The Internet giants and their allies oppose “paid prioritization” in particular.  But this is like saying the U.S. Postal Service shouldn’t be able to offer Priority or Express mail.

One of the dangers of cementing the current pricing model in regulation under the banner of preserving the open Internet is that it would prohibit alternative pricing strategies that could yield lower prices and better service for consumers.

FCC Chairman Tom Wheeler intends for his agency to begin a rulemaking proceeding this week on the appropriate regulatory treatment of broadband.  Earlier this month in Los Angeles, Wheeler said the FCC will be asking for input as to whether it should fire up “Title II.”

Wheeler was referring to Title II, a well-known portion of the Communications Act of 1934 built around the pricing regulation that buttressed the Bell System monopoly and gave birth to the regulatory morass that afflicted telecom for decades. A similar version of suffocating regulation was imposed on the cable companies in 1992 in a quixotic attempt to promote competition and secure lower prices for consumers.

Then, as now, cable and telephone companies were criticized for high prices, sub-par service, and a failure to innovate. And regulation didn’t help. There was widespread agreement that deregulated industries were outperforming the highly regulated cable and telecom companies.

By 1996, Congress overwhelmingly deemed it necessary to unwind regulation of both cable and telephone firms “in order to secure lower prices and higher quality services for American telecommunications consumers and encourage the rapid deployment of new telecommunications technologies.”

With this history as a guide, it is safe to assume not only that the mere threat of a new round of price regulation could have a chilling effect on the massive private investment still needed to expand bandwidth to meet surging demand, but also that enactment of such regulation could be a disaster.

Diminished investment is the gravest threat to the Internet, because reduced investment could lead to higher costs, congestion, higher prices, and fewer opportunities for makers of devices, content, and applications to innovate.

 

Killing TV Stations Is the Intended Consequence of Video Regulation Reform http://techliberation.com/2014/05/08/killing-tv-stations-is-the-intended-consequence-of-video-regulation-reform/ http://techliberation.com/2014/05/08/killing-tv-stations-is-the-intended-consequence-of-video-regulation-reform/#comments Thu, 08 May 2014 13:22:08 +0000 http://techliberation.com/?p=74518

Today is a big day in Congress for the war that cable and satellite providers (multichannel video programming distributors, or MVPDs) are waging on broadcast television stations. The House Judiciary Committee is holding a hearing on the compulsory licenses for broadcast television programming in the Copyright Act, and the House Energy and Commerce Committee is voting on a bill to reauthorize “STELA” (the compulsory copyright license for the retransmission of distant broadcast signals by satellite operators). The STELA license is set to expire at the end of the year unless Congress reauthorizes it, and MVPDs see the potential for Congressional action as an opportunity for broadcast television to meet its Waterloo. They desire a decisive end to the compulsory copyright licenses, the retransmission consent provision in the Communications Act, and the FCC’s broadcast exclusivity rules — which would also be the end of local television stations.

The MVPD industry’s ostensible motivations for going to war are retransmission consent fees and television “blackouts”, but the real motive is advertising revenue.

The compulsory copyright licenses prevent MVPDs from inserting their own ads into broadcast programming streams, and the retransmission consent provision and broadcast exclusivity agreements prevent them from negotiating directly with the broadcast networks for a portion of their available advertising time. If these provisions were eliminated, MVPDs could negotiate directly with broadcast networks for access to their television programming and appropriate TV station advertising revenue for themselves.

The real motivation is in the numbers. According to the FCC’s most recent media competition report, MVPDs paid a total of approximately $2.4 billion in retransmission consent fees in 2012. (See 15th Report, Table 19) In comparison, TV stations generated approximately $21.3 billion in advertising that year. Which is more believable: (1) That paying $2.4 billion in retransmission consent fees is “just not sustainable” for an MVPD industry that generated nearly $149 billion from video services in 2011 (See 15th Report, Table 9), or (2) That MVPDs want to appropriate $21.3 billion in additional advertising revenue by cutting out the “TV station middleman” and negotiating directly for television programming and advertising time with national broadcast networks? (Hint: The answer is behind door number 2.)
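
The arithmetic is easy to check. Here is a quick back-of-the-envelope sketch in Python using only the figures quoted above (rounded and purely illustrative):

```python
# Back-of-the-envelope check using the figures cited above (FCC 15th Report).
retrans_fees = 2.4          # $ billions: retransmission consent fees paid by MVPDs (2012)
tv_ad_revenue = 21.3        # $ billions: TV station advertising revenue (2012)
mvpd_video_revenue = 149.0  # $ billions: MVPD video service revenue (2011)

fee_share = retrans_fees / mvpd_video_revenue   # fees as a share of MVPD video revenue
ad_multiple = tv_ad_revenue / retrans_fees      # size of the advertising prize vs. the fees

print(f"Retransmission fees are {fee_share:.1%} of MVPD video revenue")  # ~1.6%
print(f"Broadcast ad revenue is about {ad_multiple:.1f}x those fees")    # ~8.9x
```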

What do compulsory copyright licenses, retransmission consent, and broadcast exclusivity agreements have to do with video advertising revenue?

  • The compulsory copyright licenses prohibit MVPDs from substituting their own advertisements for TV station ads: Retransmission of a broadcast television signal by an MVPD is “actionable as an act of infringement” if the content of the signal, including “any commercial advertising,” is “in any way willfully altered by the cable system through changes, deletions, or additions” (see 17 U.S.C. §§ 111(c)(3), 119(a)(5), and 122(e));
  • The retransmission consent provision prohibits MVPDs from negotiating directly with television broadcast networks for access to their programming or a share of their available advertising time: An MVPD cannot retransmit a local commercial broadcast television signal without the “express authority of the originating station” (see 47 U.S.C. § 325(b)(1)(A)); and
  • Broadcast exclusivity agreements (also known as non-duplication and syndicated exclusivity agreements) prevent MVPDs from circumventing the retransmission consent provision by negotiating for nationwide retransmission consent with one network-affiliated owned-and-operated TV station. (If an MVPD were able to retransmit the TV signals from only one television market nationwide, it could, in effect, negotiate with broadcast networks directly, because broadcast programming networks own and operate their own TV stations in some markets.)

The effect of the compulsory copyright licenses, retransmission consent provision, and broadcast exclusivity agreements is to prevent MVPDs from realizing any of the approximately $20 billion in advertising revenue generated by broadcast television programming every year.

Why did Congress want to prevent MVPDs from realizing any advertising revenue from broadcast television programming?

Congress protected the advertising revenue of local TV stations because TV stations are legally prohibited from realizing any subscription revenue for their primary programming signal. (See 47 U.S.C. § 336(b)) Congress chose to balance the burden of the broadcast business model mandate with the benefits of protecting their advertising revenue. The law forces TV stations to rely primarily on advertising revenue to generate profits, but the law also protects their ability to generate advertising revenue. Conversely, the law allows MVPDs to generate both subscription revenue and advertising revenue for their own programming, but prohibits them from poaching advertising revenue from broadcast programming.

MVPDs want to upset the balance by repealing the regulations that make free over-the-air television possible without repealing the regulations that require TV stations to provide free over-the-air programming. Eliminating only the regulations that benefit broadcasters while retaining their regulatory burdens is not a free market approach — it is a video marketplace firing squad aimed squarely at the heart of TV stations.

Adopting the MVPD version of video regulation reform would not kill broadcast programming networks. They would always have the option of becoming cable networks and selling their programming and advertising time directly to MVPDs, or of distributing their content directly over the Internet themselves.

The casualty of this so-called “reform” effort would be local TV stations, who are required by law to rely on advertising and retransmission consent fees for their survival. Policymakers should recognize that killing local TV stations for their advertising revenue is the ultimate goal of current video reform efforts before adopting piecemeal changes to the law. If policymakers intend to kill TV stations, they should not attribute the resulting execution to the “friendly fire” of unintended consequences. They should recognize the legitimate consumer and investment-backed expectations created by the current statutory framework and consider appropriate transition mechanisms after a comprehensive review.

Crovitz on The End of the Permissionless Web http://techliberation.com/2014/05/07/crovitz-on-the-end-of-the-permissionless-web/ http://techliberation.com/2014/05/07/crovitz-on-the-end-of-the-permissionless-web/#comments Thu, 08 May 2014 03:00:02 +0000 http://techliberation.com/?p=74508

Few people have been more tireless in their defense of the notion of “permissionless innovation” than Wall Street Journal columnist L. Gordon Crovitz. In his weekly “Information Age” column for the Journal (which appears each Monday), Crovitz has consistently sounded the alarm regarding new threats to Internet freedom, technological freedom, and individual liberties. It was, therefore, a great honor for me to wake up Monday morning and read his latest post, “The End of the Permissionless Web,” which discussed my new book “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.”

“The first generation of the Internet did not go well for regulators,” Crovitz begins his column. “Despite early proposals to register websites and require government approval for business practices, the Internet in the U.S. developed largely without bureaucratic control and became an unstoppable engine of innovation and economic growth.” Unfortunately, he correctly notes:

Regulators don’t plan to make the same mistake with the next generation of innovations. Bureaucrats and prosecutors are moving in to undermine services that use the Internet in new ways to offer everything from getting a taxi to using self-driving cars to finding a place to stay.

This is exactly why I penned my little manifesto. As Crovitz goes on to note in his essay, new regulatory threats to both existing and emerging technologies are popping up on almost a daily basis. He highlights current battles over Uber, Airbnb, 23andMe, commercial drones, and more. And his previous columns have discussed many other efforts to “permission” innovation and force heavy-handed top-down regulatory schemes on fast-paced and rapidly-evolving sectors and technologies. As he argues:

The hardest thing for government regulators to do is to regulate less, which is why the development of the open-innovation Internet was a rare achievement. The regulation the digital economy needs most now is for permissionless innovation to become the default law of the land, not the exception.

Amen, brother! What we need to do is find more constructive ways to deal with some of the fears that motivate calls for regulation. But, as I noted in my little book, how we address these concerns matters greatly. If and when problems develop, there are many less burdensome ways to address them than through preemptive technological controls. The best solutions to complex social problems are almost always organic and “bottom-up” in nature. Luckily, there exists a wide variety of constructive approaches that can be tapped to address or alleviate concerns associated with new innovations. I get very specific about those approaches in Chapter 5 of my book, which is entitled, “Preserving Permissionless Innovation: Principles of Progress.”

So, I hope you’ll download a free copy of the book and take a look. And my sincerest thanks to Gordon Crovitz for featuring it in his excellent new column.

Skorup and Thierer paper on TV Regulation http://techliberation.com/2014/05/05/skorup-and-thierer-paper-on-tv-regulation/ http://techliberation.com/2014/05/05/skorup-and-thierer-paper-on-tv-regulation/#comments Mon, 05 May 2014 17:24:22 +0000 http://techliberation.com/?p=74501

Adam and I recently published a Mercatus research paper titled Video Marketplace Regulation: A Primer on the History of Television Regulation And Current Legislative Proposals, now available on SSRN. I presented the paper at a Silicon Flatirons academic conference last week.

We wrote the paper for a policy audience and students who want succinct information and history about the complex world of television regulation. Television programming is delivered to consumers in several ways, including via cable, satellite, broadcast, IPTV (like Verizon FiOS), and, increasingly, over-the-top broadband services (like Netflix and Amazon Instant Video). Despite the obvious similarity among these platforms (each transmits movies and shows to a screen), each distribution platform is regulated differently.

The television industry is in the news frequently because of problems exacerbated by the disparate regulatory treatment. The Time Warner Cable-CBS dispute last fall (and TWC’s ensuing loss of customers), the Aereo lawsuit, and the Comcast-TWC proposed merger were each caused at least indirectly by some of the ill-conceived and antiquated TV regulations we describe. Further, TV regulation is a “thicket of regulations,” as the Copyright Office has said, which benefits industry insiders at the expense of most everyone else.

We contend that overregulation of television resulted primarily because past FCCs, and Congress to a lesser extent, wanted to promote several social objectives through a nationwide system of local broadcasters:

1) Localism;
2) Universal Service;
3) Free (that is, ad-based) television; and
4) Competition.

These objectives can’t be accomplished simultaneously without substantial regulatory mandates. Further, these social goals may even contradict each other in some respects.

For decades, public policies constrained TV competitors to accomplish those goals. We recommend instead a reliance on markets and consumer choice through comprehensive reform of television laws, including repeal of compulsory copyright laws, must-carry, retransmission consent, and media concentration rules.

At the very least, our historical review of TV regulations provides an illustrative case study of how regulations accumulate haphazardly over time, demand additional “correction,” and damage dynamic industries. Unfortunately, Congress and the FCC focused on attaining particular competitive outcomes through industrial policy. Our paper provides support for market-based competition and regulations that put consumer choice at the forefront.

Book event on Wednesday: A libertarian vision of copyright http://techliberation.com/2014/05/05/book-event-on-wednesday-a-libertarian-vision-of-copyright/ http://techliberation.com/2014/05/05/book-event-on-wednesday-a-libertarian-vision-of-copyright/#comments Mon, 05 May 2014 15:07:01 +0000 http://techliberation.com/?p=74496

Last week, the Mercatus Center at George Mason University published the new book by Tom W. Bell, Intellectual Privilege: Copyright, Common Law, and the Common Good, which Eugene Volokh calls “A fascinating, highly readable, and original look at copyright[.]” Richard Epstein says that Bell’s book “makes a distinctive contribution to a field in which fundamental political theory too often takes a back seat to more overt utilitarian calculations.” Some key takeaways from the book:

  • If copyright were really property, like a house or cell phone, most Americans would belong in jail. That nobody seriously thinks infringement should be fully enforced demonstrates that copyright is not property and that copyright policy is broken.
  • Under the Founders’ Copyright, as set forth in the 1790 Copyright Act, works could be protected for a maximum of 28 years. Under present law, they can be extended to 120 years. The massive growth of intellectual privilege serves big corporate publishers to the detriment of individual authors and artists.
  • By discriminating against unoriginal speech, copyright sharply limits our freedoms of expression.
  • We should return to the wisdom of the Founders and regard copyrights as special privileges narrowly crafted to serve the common good.

This week, on Wednesday, May 7, at noon, the Cato Institute will hold a book forum featuring Bell, with comments by Christopher Newman, Assistant Professor, George Mason University School of Law. It’s going to be a terrific event and you should come. Please make sure to RSVP.

FCC Incentive Auction Plan Won’t Benefit Rural America http://techliberation.com/2014/05/05/fcc-incentive-auction-plan-wont-benefit-rural-america/ http://techliberation.com/2014/05/05/fcc-incentive-auction-plan-wont-benefit-rural-america/#comments Mon, 05 May 2014 14:31:24 +0000 http://techliberation.com/?p=74492

The FCC is set to vote later this month on rules for the incentive auction of spectrum licenses in the broadcast television band. These licenses would ordinarily be won by the highest bidders, but not in this auction. The FCC plans to ensure that Sprint and T-Mobile win licenses in the incentive auction even if they aren’t willing to pay the highest price, because it believes that Sprint and T-Mobile will expand their networks to cover rural areas if it sells them licenses at a substantial discount.

This theory is fundamentally flawed. Sprint and T-Mobile won’t substantially expand their footprints into rural areas even if the FCC were to give them spectrum licenses for free. There simply isn’t enough additional revenue potential in rural areas to justify covering them with four or more networks no matter what spectrum is used or how much it costs. It is far more likely that Sprint and T-Mobile will focus their efforts on more profitable urban areas while continuing to rely on FCC roaming rights to use networks built by other carriers in rural areas.

The television band spectrum the FCC plans to auction is at relatively low frequencies that are capable of covering larger areas at lower costs than higher frequency mobile spectrum, which makes the spectrum particularly useful in rural areas. The FCC theorizes that, if Sprint and T-Mobile could obtain additional low frequency spectrum with a substantial government discount, they will pass that discount on to consumers by expanding their wireless coverage in rural areas.

The flaw in this theory is that it considers costs without considering revenue. Sprint and T-Mobile won’t expand coverage in rural areas unless the potential for additional revenue exceeds the costs of providing rural coverage.

A study authored by Anna-Maria Kovacs, a scholar at Georgetown University, demonstrates that the potential revenue in rural areas is insufficient to justify substantial rural deployment by Sprint and T-Mobile even at lower frequencies. The study concludes that the revenue potential per square mile in areas that are currently covered by 4 wireless carriers is $41,832. The potential revenue drops to $13,632 per square mile in areas covered by 3 carriers and to $6,219 in areas covered by 2 carriers. The potential revenue in areas covered by 4 carriers is thus roughly three times greater than in areas covered by 3 carriers and nearly seven times greater than in areas covered by 2 carriers. It is unlikely that propagation differences between even the lowest and the highest frequency mobile spectrum could reduce costs by a factor greater than three due to path loss and barriers to optimal antenna placement.
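
A quick calculation from the quoted figures makes the gap concrete (a rough sketch; the three-fold cost advantage is the assumed upper bound from the propagation argument above, not a number from the study):

```python
# Revenue potential per square mile (Kovacs study figures quoted above),
# keyed by the number of carriers currently serving an area.
revenue_per_sq_mile = {4: 41_832, 3: 13_632, 2: 6_219}

ratio_4_to_3 = revenue_per_sq_mile[4] / revenue_per_sq_mile[3]
ratio_4_to_2 = revenue_per_sq_mile[4] / revenue_per_sq_mile[2]
print(f"4-carrier vs. 3-carrier areas: {ratio_4_to_3:.1f}x")  # ~3.1x
print(f"4-carrier vs. 2-carrier areas: {ratio_4_to_2:.1f}x")  # ~6.7x

# Even the generous assumption that low-band spectrum cuts deployment costs
# by a factor of three leaves 2-carrier areas far short of viability.
assumed_cost_advantage = 3.0
print(ratio_4_to_2 > assumed_cost_advantage)  # True: the revenue gap swamps the cost advantage
```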

Even assuming the low frequency spectrum could lower costs by a factor greater than three, the revenue data in the Kovacs report indicates that additional low frequency spectrum would, at best, support only 1 additional carrier in areas currently covered by 3 carriers. Low frequency spectrum wouldn’t support even one additional carrier in areas that are already covered by 1 or 2 carriers: It would be uneconomic for additional carriers to deploy in those areas at any frequency.

The challenging economics of rural wireless coverage are the primary reason the FCC gave Sprint and T-Mobile a roaming right to use the wireless networks built by Verizon and AT&T even in areas where Sprint and T-Mobile already hold low frequency spectrum.

When the FCC created the automatic roaming right, it exempted carriers from the duty to provide roaming in markets where the requesting carrier already has spectrum rights. (2007 Roaming Order at ¶ 48) The FCC found that, “if a carrier is allowed to ‘piggy-back’ on the network coverage of a competing carrier in the same market, then both carriers lose the incentive to build out into high cost areas in order to achieve superior network coverage.” (Id. at ¶ 49). The FCC subsequently repealed this spectrum exemption at the urging of Sprint and T-Mobile, because “building another network may be economically infeasible or unrealistic in some geographic portions of [their] licensed service areas.” (2010 Roaming Order at ¶ 23)

As a result, Sprint and T-Mobile have chosen to rely primarily on roaming agreements to provide service in rural areas, because it is cheaper than building their own networks. The most notorious example is Sprint, who actually reduced its rural coverage to cut costs after the FCC eliminated the spectrum exemption to the automatic roaming right. This decision was not driven by Sprint’s lack of access to low frequency spectrum — Sprint has held low frequency spectrum on a nationwide basis for years.

The limited revenue potential offered by rural areas and the superior economic alternative to rural deployment provided by the FCC’s automatic roaming right indicate that Sprint and T-Mobile won’t expand their rural footprints at any frequency. Ensuring that Sprint and T-Mobile win low frequency spectrum at a substantial government discount would benefit their bottom lines, but it won’t benefit rural Americans.

What Vox Doesn’t Get About the “Battle for the Future of the Internet” http://techliberation.com/2014/05/02/what-vox-doesnt-get-about-the-battle-for-the-future-of-the-internet/ http://techliberation.com/2014/05/02/what-vox-doesnt-get-about-the-battle-for-the-future-of-the-internet/#comments Fri, 02 May 2014 18:56:31 +0000 http://techliberation.com/?p=74487

My friend Tim Lee has an article at Vox that argues that interconnection is the new frontier on which the battle for the future of the Internet is being waged. I think the article doesn’t really consider how interconnection has worked in the last few years, and consequently, it makes a big deal out of something that is pretty harmless.

How the Internet used to work

The Internet is a network of networks. Your ISP is a network. It connects to other ISPs and exchanges traffic with them. Because a connection between two ISPs is roughly equally valuable to both of them, this often happens through “settlement-free peering,” in which the networks exchange traffic on an unpriced basis.

Not every ISP connects directly to every other ISP. For example, a local ISP in California probably doesn’t connect directly to a local ISP in New York. If you’re an ISP that wants to be sure your customers can reach every other network on the Internet, you have to purchase “transit” service from a bigger or more specialized ISP. Buying transit allows an ISP to transmit data along what used to be called “the backbone” of the Internet. Transit providers that exchange roughly equally valued traffic with other networks themselves have settlement-free peering arrangements with those networks.
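
A toy sketch of that peer-or-pay logic (the networks, traffic volumes, and the 2:1 balance threshold below are all invented for illustration; real peering policies set their own terms):

```python
# Toy model of the interconnection choice described above; numbers are invented.
def interconnection_choice(sent_gbps: float, received_gbps: float,
                           balance_threshold: float = 2.0) -> str:
    """Roughly balanced traffic supports settlement-free peering; lopsided
    traffic usually means the out-of-balance party pays for transit or peering."""
    heavier = max(sent_gbps, received_gbps)
    lighter = min(sent_gbps, received_gbps)
    if lighter > 0 and heavier / lighter <= balance_threshold:
        return "settlement-free peering"
    return "paid arrangement (transit or paid peering)"

# Two large networks exchanging similar volumes vs. a small regional ISP whose
# customers mostly pull traffic down from the rest of the Internet.
print(interconnection_choice(900, 1_100))  # settlement-free peering
print(interconnection_choice(60, 700))     # paid arrangement (transit or paid peering)
```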

How the Internet works now

A few things have changed in the last several years. One major change is that most major ISPs have very large, geographically-dispersed networks. For example, Comcast serves customers in 40 states, and other networks can peer with them in 18 different locations across the US. These 18 locations are connected to each other through very fast cables that Comcast owns. In other words, Comcast is not just a residential ISP anymore. They are part of what used to be called “the backbone,” although it no longer makes sense to call it that since there are so many big pipes that cross the country and so much traffic is transmitted directly through ISP interconnection.

Another thing that has changed is that content providers are increasingly delivering a lot of a) traffic-intensive and b) time-sensitive content across the Internet. This has created the incentive to use what are known as content-delivery networks (CDNs). CDNs are specialized ISPs that locate servers right on the edge of all terminating ISPs’ networks. There are a lot of CDNs—here is one list.

By locating on the edge of each consumer ISP, CDNs are able to deliver content to end users with very low latency and at very fast speeds. For this service, they charge money to their customers. However, they also have to pay consumer ISPs for access to their networks, because the traffic flow is all going in one direction and otherwise CDNs would be making money by using up resources on the consumer ISP’s network.

CDNs’ payments to consumer ISPs are also a matter of equity between the ISP’s customers. Let’s suppose that Vox hires Amazon CloudFront to serve traffic to Comcast customers (they do). If the 50 percent of Comcast customers who wanted to read Vox suddenly started using up so many network resources that Comcast and CloudFront needed to upgrade their connection, who should pay for the upgrade? The naïve answer is to say that Comcast should, because that is what customers are paying them for. But the efficient answer is that the 50 percent who want to access Vox should pay for it, and the 50 percent who don’t want to access it shouldn’t. By Comcast charging CloudFront to access the Comcast network, and CloudFront passing along those costs to Vox, and Vox passing along those costs to customers in the form of advertising, the resource costs of using the network are being paid by those who are using them and not by those who aren’t.
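
To put rough numbers on that equity point, here is a small sketch (the subscriber counts and upgrade cost are invented, not Comcast’s or CloudFront’s actual figures):

```python
# Invented numbers to illustrate who ends up paying for an interconnection upgrade.
subscribers = 1_000_000
vox_readers = subscribers // 2   # the 50 percent whose traffic drives the upgrade
upgrade_cost = 600_000           # hypothetical annual cost of the bigger link

# Naive answer: Comcast absorbs the cost and spreads it across every subscriber's bill.
cost_if_everyone_pays = upgrade_cost / subscribers

# Efficient answer: Comcast bills CloudFront, CloudFront bills Vox, and Vox recovers
# the cost (through advertising) only from the readers who actually use the link.
cost_if_users_pay = upgrade_cost / vox_readers

print(f"Spread across all subscribers: ${cost_if_everyone_pays:.2f} per subscriber per year")
print(f"Borne only by Vox readers:     ${cost_if_users_pay:.2f} per reader per year")
```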

What happened with the Netflix/Comcast dust-up?

Netflix used multiple CDNs to serve its content to subscribers. For example, it used a CDN provided by Cogent to serve content to Comcast customers. Cogent ran out of capacity and refused to upgrade its link to Comcast. As a result, some of Comcast’s customers experienced a decline in quality of Netflix streaming. However, Comcast customers who accessed Netflix with an Apple TV, which is served by CDNs from Level 3 and Limelight, never had any problems. Cogent has had peering disputes in the past with many other networks.

To solve the congestion problem, Netflix and Comcast negotiated a direct interconnection. Instead of Netflix paying Cogent and Cogent paying Comcast, Netflix is now paying Comcast directly. They signed a multi-year deal that is reported to reduce Netflix’s costs relative to what they would have paid through Cogent. Essentially, Netflix is vertically integrating into the CDN business. This makes sense. High-quality CDN service is essential to Netflix’s business; they can’t afford to experience the kind of incident that Cogent caused with Comcast. When a service is strategically important to your business, it’s often a good idea to vertically integrate.

It should be noted that what Comcast and Netflix negotiated was not a “fast lane”—Comcast is prohibited from offering prioritized traffic as a condition of its merger with NBC/Universal.

What about Comcast’s market power?

I think that one of Tim’s hangups is that Comcast has a lot of local market power. There are lots of barriers to creating a competing local ISP in Comcast’s territories. Doesn’t this mean that Comcast will abuse its market power and try to gouge CDNs?

Let’s suppose that Comcast is a pure monopolist in a two-sided market. It’s already extracting the maximum amount of rent that it can on the consumer side. Now it turns to the upstream market and tries to extract rent. The problem with this is that it can only extract rents from upstream content producers insofar as it lowers the value of the rent it can collect from consumers. If customers have to pay higher Netflix bills, then they will be less willing to pay Comcast. The fact that the market is two-sided does not significantly increase the amount of monopoly rent that Comcast can collect.
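
A stylized illustration of that point, with invented numbers: suppose a consumer values broadband-plus-Netflix at a fixed amount per month, Netflix passes any interconnection fee through to its subscription price, and Comcast charges the consumer whatever the bundle is still worth to them. However the fee is set, the monopolist’s total take is unchanged:

```python
# Stylized model (all numbers invented) of the two-sided rent argument above.
def comcast_total_rent(consumer_value: float, netflix_cost: float, fee: float) -> float:
    netflix_price = netflix_cost + fee                # fee passed through to subscribers
    broadband_price = consumer_value - netflix_price  # the most the consumer will still pay Comcast
    return broadband_price + fee                      # broadband revenue plus the interconnection fee

for fee in (0.0, 5.0, 20.0):
    print(fee, comcast_total_rent(consumer_value=100.0, netflix_cost=10.0, fee=fee))
# Prints 90.0 each time: raising the fee only shifts where the rent is collected;
# it does not increase the total a monopolist can extract.
```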

Interconnection fees that are being paid to Comcast (and nearly all other major ISPs) have virtually nothing to do with Comcast’s market power and everything to do with the fact that the Internet has changed, both in structure and content. This is simply how the Internet works. I use CloudFront, the same CDN that Vox uses, to serve even a small site like my Bitcoin Volatility Index. CloudFront negotiates payments to Comcast and other ISPs on my and Vox’s behalf. There is nothing unseemly about Netflix making similar payments to Comcast, whether indirectly through Cogent or directly, nor is there anything about this arrangement that harms “the little guy” (like me!).

For more reading material on the Netflix/Comcast arrangement, I recommend Dan Rayburn’s posts here, here, and here. Interconnection is a very technical subject, and someone with very specialized expertise like Dan is invaluable in understanding this issue.

Defining “Technology” http://techliberation.com/2014/04/29/defining-technology/ http://techliberation.com/2014/04/29/defining-technology/#comments Tue, 29 Apr 2014 13:53:07 +0000 http://techliberation.com/?p=74464

I spend a lot of time reading books and essays about technology; more specifically, books and essays about technology history and criticism. Yet, I am often struck by how few of the authors of these works even bother defining what they mean by “technology.” I find that frustrating because, if you are going to make an attempt to either study or critique a particular technology or technological practice or development, then you probably should take the time to tell us how broadly or narrowly you are defining the term “technology” or “technological process.”

Of course, it’s not easy. “In fact, technology is a word we use all of the time, and ordinarily it seems to work well enough as a shorthand, catch-all sort of word,” notes the always-insightful Michael Sacasas in his essay “Traditions of Technological Criticism.” “That same sometimes useful quality, however, makes it inadequate and counter-productive in situations that call for more precise terminology,” he says.

Quite right, and for a more detailed and critical discussion of how earlier scholars, historians, and intellectuals have defined or thought about the term “technology,” you’ll want to check out Michael’s other recent essay, “What Are We Talking About When We Talk About Technology?” which preceded the one cited above. We don’t always agree on things — in fact, I am quite certain that most of my comparatively amateurish work must make his blood boil at times! — but you won’t find a more thoughtful technology scholar alive today than Michael Sacasas. If you’re serious about studying technology history and criticism, you should follow his blog and check out his book, The Tourist and The Pilgrim: Essays on Life and Technology in the Digital Age, which is a collection of some of his finest essays.

Anyway, for what it’s worth, I figured I would create this post to list some of the more interesting definitions of “technology” that I have uncovered in my own research. I suspect I will add to it in coming months and years, so please feel free to suggest other additions since I would like this to be a useful resource to others.

I figure the easiest thing to do is to just list the definitions by author. There’s no particular order here, although that might change in the future since I could arrange this chronologically and push the inquiry all the way back to how the Greeks thought about the term (the root term “techne,” that is). But for now this collection is a bit random and incorporates mostly modern conceptions of “technology” since the term didn’t really gain traction until relatively recent times.

Also, I’ve not bothered critiquing any particular definition or conception of the term, although that may change in the future, too. (I did, however, go after a few modern tech critics briefly in my recent booklet, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.” So, you might want to check that out for more on how I feel, as well as my old essays, “What Does It Mean to ‘Have a Conversation’ about a New Technology?” and, “On the Line between Technology Ethics vs. Technology Policy.”)

So, I’ll begin with two straightforward definitions from the Merriam-Webster and Oxford dictionaries and then bring in the definitions from various historians and critics.


Merriam-Webster Dictionary

Technology (noun):

1)     (a): the practical application of knowledge especially in a particular area; (b): a capability given by the practical application of knowledge

2)      a manner of accomplishing a task especially using technical processes, methods, or knowledge.

3)      the specialized aspects of a particular field of endeavor.

Oxford Dictionary

Technology (noun):

1)      The application of scientific knowledge for practical purposes, especially in industry.

2)      Machinery and devices developed from scientific knowledge.

3)      The branch of knowledge dealing with engineering or applied sciences.

 

Thomas P. Hughes

I have always loved the opening passage from Thomas Hughes’s 2004 book, Human-Built World: How to Think about Technology and Culture:

“Technology is messy and complex. It is difficult to define and to understand. In its variety, it is full of contradictions, laden with human folly, saved by occasional benign deeds, and rich with unintended consequences.” (p. 1) “Defining technology in its complexity,” he continued, “is as difficult as grasping the essence of politics.” (p. 2)

So true! Nonetheless, Hughes went on to offer his own definition of technology as:

“a creativity process involving human ingenuity.” (p. 3)

Interestingly, in another book, American Genesis: A Century of Invention and Technological Enthusiasm, 1870-1970, he offered a somewhat different definition:

“Technology is the effort to organize the world for problem solving so that goods and services can be invented, developed, produced, and used.” (p. 6, 2004 ed., emphasis in original.)

 

W. Brian Arthur

In his 2009 book, The Nature of Technology: What It Is and How It Evolves, W. Brian Arthur sketched out three conceptions of technology.

1)      “The first and most basic one is [that] a technology is a means to fulfill a human purpose. … As a means, a technology may be a method or process or device… Or it may be complicated… Or it may be material… Or it may be nonmaterial. Whichever it is, it is always a means to carry out a human purpose.”

2)      “The second definition is a plural one: technology as an assemblage of practices and components.”

3)      “I will also allow a third meaning. This [is] technology as the entire collection of devices and engineering practices available to a culture.” (p. 28, emphasis in original.)

 

Alfred P. Sloan Foundation / Richard Rhodes

In his 1999 book, Visions of Technology: A Century Of Vital Debate About Machines Systems And The Human World, Pulitzer Prize-winning historian Richard Rhodes assembled a wonderful collection of essays about technology that spanned the entire 20th century. It’s a terrific volume to have on your bookshelf if you want a quick overview of how over a hundred leading scholars, critics, historians, scientists, and authors thought about technology and technological advances.

The collection kicked off with a brief preface from the Alfred P. Sloan Foundation (no specific Foundation author was listed) that included one of the most succinct definitions of the term you’ll ever read:

“Technology is the application of science, engineering and industrial organization to create a human-built world.” (p. 19)

Just a few pages later, however, Rhodes notes that this definition is probably too simplistic:

“Ask a friend today to define technology and you might hear words like ‘machines,’ ‘engineering,’ ‘science.’ Most of us aren’t even sure where science leaves off and technology begins. Neither are the experts.”

Again, so true!

 

Joel Mokyr

Lever of Riches: Technological Creativity and Economic Progress (1990) by Joel Mokyr is one of the most readable and enjoyable histories of technology you’ll ever come across. I highly recommend it. [My thanks to my friend William Rinehart for bringing the book to my attention.] In Lever of Riches, Mokyr defines “technological progress” as follows:

“By technological progress I mean any change in the application of information to the production process in such a way as to increase efficiency, resulting either in the production of a given output with fewer resources (i.e., lower costs), or the production of better or new products.” (p. 6)

 

Edwin Mansfield

You’ll find definitions of both “technology” and “technological change” in Edwin Mansfield’s Technological Change: An Introduction to a Vital Area of Modern Economics (1968, 1971):

“Technology is society’s pool of knowledge regarding the industrial arts. It consists of knowledge used by industry regarding the principles of physical and social phenomena… knowledge regarding the application of these principles to production… and knowledge regarding the day-to-day operations of production…”

“Technological change is the advance of technology, such advance often taking the form of new methods of producing existing products, new designs which enable the production of products with important new characteristics, and new techniques of organization, marketing, and management.” (p. 9-10)

 

Read Bain

In his December 1937 essay in Vol. 2, Issue No. 6 of the American Sociological Review, “Technology and State Government,” Read Bain said:

 “technology includes all tools, machines, utensils, weapons, instruments, housing, clothing, communicating and transporting devices and the skills by which we produce and use them.” (p. 860)

[My thanks to Jasmine McNealy for bringing this one to my attention.]

 

David M. Kaplan

Found this one thanks to Sacasas. It’s from David M. Kaplan, Ricoeur’s Critical Theory (2003), which I have not yet had the chance to read:

“Technologies are best seen as systems that combine technique and activities with implements and artifacts, within a social context of organization in which the technologies are developed, employed, and administered. They alter patterns of human activity and institutions by making worlds that shape our culture and our environment. If technology consists of not only tools, implements, and artifacts, but also whole networks of social relations that structure, limit, and enable social life, then we can say that a circle exists between humanity and technology, each shaping and affecting the other. Technologies are fashioned to reflect and extend human interests, activities, and social arrangements, which are, in turn, conditioned, structured, and transformed by technological systems.”

I liked Michael’s comment on this beefy definition: “This definitional bloat is a symptom of the technological complexity of modern societies. It is also a consequence of our growing awareness of the significance of what we make.”

 

Jacques Ellul

Jacques Ellul, a French theologian and sociologist, penned a massive, 440-plus-page work of technological criticism, La Technique ou L’enjeu du Siècle (1954), which was later translated into English as The Technological Society (New York: Vintage Books, 1964). In setting forth his critique of modern technological society, he used the term “technique” repeatedly and contrasted it with “technology.” He defined technique as follows:

“The term technique, as I use it, does not mean machines, technology, or this or that procedure for attaining an end. In our technological society, technique is the totality of methods rationally arrived at and having absolute efficiency (for a given state of development) in every field of human activity. […]

Technique is not an isolated fact in society (as the term technology would lead us to believe) but is related to every factor in the life of modern man; it affects social facts as well as all others. Thus technique itself is a sociological phenomenon…” (p. xxvi, emphasis in original.)

 

Bernard Stiegler

In La technique et le temps, 1: La faute d’Épiméthée, translated as Technics and Time, 1: The Fault of Epimetheus (1998), French philosopher Bernard Stiegler defines technology as:

“the pursuit of life by means other than life”

[I found that one here.]

 


Again, please feel free to suggest additions to this compendium that future students and scholars might find useful. I hope that this can become a resource to them.

New Essays about Permissionless Innovation & Why It Matters http://techliberation.com/2014/04/27/new-essays-about-permissionless-innovation-why-it-matters/ http://techliberation.com/2014/04/27/new-essays-about-permissionless-innovation-why-it-matters/#comments Sun, 27 Apr 2014 22:11:12 +0000 http://techliberation.com/?p=74459

This past week I posted two new essays related to my new book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.” Just thought I would post quick links here.

First, my old colleague Dan Rothschild was kind enough to ask me to contribute a post to the R Street Blog entitled, “Bucking the ‘Mother, May I?’ Mentality.” In it, I offered this definition and defense of permissionless innovation as a policy norm:

Permissionless innovation is about the creativity of the human mind to run wild in its inherent curiosity and inventiveness, even when it disrupts certain cultural norms or economic business models. It is that unhindered freedom to experiment that ushered in many of the remarkable technological advances of modern times. In particular, all the digital devices, systems and networks that we now take for granted came about because innovators were at liberty to let their minds run wild.

Steve Jobs and Apple didn’t need a permit to produce the first iPhone. Jeff Bezos and Amazon didn’t need to ask anyone for the right to create a massive online marketplace. When Sergey Brin and Larry Page wanted to release Google’s innovative search engine into the wild, they didn’t need to get a license first. And Mark Zuckerberg never had to get anyone’s blessing to launch Facebook or let people freely create their own profile pages.

All of these digital tools and services were creatively disruptive technologies that altered the fortunes of existing companies and challenged various social norms. Luckily, however, nothing preemptively stopped that innovation from happening. Today, the world is better off because of it, with more and better information choices than ever before.

I also posted an essay over on Medium entitled, “Why Permissionless Innovation Matters.” It’s a longer essay that seeks to answer the question: Why does economic growth occur in some societies and not in others? I build on comments that venture capitalist Fred Wilson of Union Square Ventures made during recent testimony: “If you look at the countries around the world where the most innovation happens, you will see a very high, I would argue a direct, correlation between innovation and freedom. They are two sides of the same coin.” I continue on to argue in my essay:

that’s true in both a narrow and broad sense. It’s true in a narrow sense that innovation is tightly correlated with the general freedom to experiment, fail, and learn from it. More broadly, that general freedom to experiment and innovate is highly correlated with human freedom in the aggregate.

Indeed, I argue in my book that we can link an embrace of dynamism and permissionless innovation to the expansion of cultural and economic freedom throughout history. In other words, there is a symbiotic relationship between freedom and progress. In his book, History of the Idea of Progress, Robert Nisbet wrote of those who adhere to “the belief that freedom is necessary to progress, and that the goal of progress, from most distant past to the remote future, is ever-ascending realization of freedom.” That’s generally the ethos that drives the dynamist vision and that also explains why getting the policy incentives right matters so much. Freedom — including the general freedom to engage in technological tinkering, endless experimentation, and acts of social and economic entrepreneurialism — is essential to achieving long-term progress and prosperity.

I also explain how the United States generally got policy right for the Internet and the digital economy in the 1990s by embracing this vision and enshrining it into law in various ways. I conclude by noting that:

If we hope to encourage the continued development of even more “technologies of freedom,” and enjoy the many benefits they provide, we must make sure that, to the maximum extent possible, the default position toward new forms of technological innovation remains “innovation allowed.” Permissionless innovation should, as a general rule, trump precautionary principle thinking. The burden of proof rests on those who favor precautionary policy prescriptions to explain why ongoing experimentation with new ways of doing things should be prevented preemptively.

Again, read the entire thing over at Medium. Also, over at Circle ID this week, Konstantinos Komaitis published a related essay, “Permissionless Innovation: Why It Matters,” in which he argued that “Permissionless innovation is key to the Internet’s continued development. We should preserve it and not question it.” He was kind enough to quote my book in that essay. I encourage you to check out his piece.
