March 2014

Give us our drone-delivered beer!

That’s how the conversation got started between John Stossel and me on his show this week. I appeared on Stossel’s Fox Business TV show to discuss the many beneficial uses of private drones. The trouble is that drones — which are more appropriately called unmanned aircraft systems — have an image problem. Today they tend to conjure up images of nefarious military machines dealing death and destruction from above in a far-off land. And certainly plenty of that happens today (far, far too much in my personal opinion, but that’s a rant best left for another day!).

But any technology can be put to both good and bad uses, and drones are merely the latest in a long list of “dual-use technologies,” which have both military uses and peaceful private uses. Other examples of dual-use technologies include: automobiles, airplanes, ships, rockets and propulsion systems, chemicals, computers and electronic systems, lasers, sensors, and so on. Put simply, almost any technology that can be used to wage war can also be used to wage peace and commerce. And that’s equally true for drones, which come in many sizes and have many peaceful, non-military uses. Thus, it would be wrong to judge them based upon their early military history or how they are currently perceived. (After all, let’s not forget that the Internet’s early origins were militaristic in character, too!)

Some of the other beneficial uses and applications of unmanned aircraft systems include: agricultural (crop inspection & management, surveying); environmental (geological, forest management, tornado & hurricane research); industrial (site & service inspection, surveying); infrastructure management (traffic and accident monitoring); public safety (search & rescue, post-natural disaster services, other law enforcement); and delivery services (goods & parcels, food & beverages, flowers, medicines, etc.), just to name a few.

Continue reading →

Some recent tech news provides insight into the trajectory of broadband and television markets. These stories also indicate a poor prognosis for net neutrality. Setting aside the substantial political and ISP opposition to new rules, even net neutrality proponents point out that “neutrality” is difficult to define and even harder to implement. Now that the line between “Internet video” and “television” delivered via Internet Protocol (IP) is increasingly blurring, net neutrality goals are suffering from mission creep.

First, there was the announcement that Netflix, like many large content companies, was entering into a paid peering agreement with Comcast, prompting a complaint from Netflix CEO Reed Hastings, who argued that ISPs have too much leverage in negotiating these interconnection deals.

Second, Comcast and Apple discussed a possible partnership whereby Comcast customers would receive prioritized access to Apple’s new video service. Apple’s TV offering would be a “managed service” exempt from net neutrality obligations.

Interconnection and managed services are generally not considered net neutrality issues. They are not “loopholes.” They were expressly exempted from the FCC’s 2010 (now-defunct) rules. However, net neutrality proponents are attempting to bring interconnection and managed services to the FCC’s attention as the FCC crafts new net neutrality rules. Net neutrality proponents have an uphill battle already, and the following trends won’t help. Continue reading →

Most conservatives and many prominent thinkers on the left agree that the Communications Act should be updated based on the insight provided by the wireless and Internet protocol revolutions. The fundamental problem with the current legislation is its disparate treatment of competitive communications services. A comprehensive legislative update offers an opportunity to adopt a technologically neutral, consumer-focused approach to communications regulation that would maximize competition, investment, and innovation.

Though the Federal Communications Commission (FCC) must continue implementing the existing Act while Congress deliberates legislative changes, the agency should avoid creating new regulatory disparities on its own. Yet that is where the agency appears to be heading at its meeting next Monday. Continue reading →

I am pleased to announce the release of my latest book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.” It’s a short manifesto (just under 100 pages) that condenses — and attempts to make more accessible — arguments that I have developed in various law review articles, working papers, and blog posts over the past few years. I have two goals with this book.

First, I attempt to show how the central fault line in almost all modern technology policy debates revolves around “the permission question,” which asks: Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? How that question is answered depends on the disposition one adopts toward new inventions. Two conflicting attitudes are evident.

One disposition is known as the “precautionary principle.” Generally speaking, it refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause harm to individuals, groups, specific entities, cultural norms, or existing laws and traditions.

The other vision can be labeled “permissionless innovation.” It refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.

I argue that we are witnessing a grand clash of visions between these two mindsets in almost all major technology policy discussions today. Continue reading →

The Mercatus Center at George Mason University has released a new working paper by Daniel A. Lyons, professor at Boston College Law School, entitled “Innovations in Mobile Broadband Pricing.”

In 2010, the FCC passed net neutrality rules for mobile carriers and ISPs that included a “no blocking” provision (since struck down in Verizon v. FCC). The FCC prohibited mobile carriers from blocking Internet content and promised to scrutinize carriers’ non-standard pricing decisions. These broad regulations had a predictable chilling effect on firms trying new business models. For instance, Lyons describes how MetroPCS was hit with a net neutrality complaint because it allowed YouTube but not other video streaming sites on its budget LTE plan (something I’ve written on). Some critics also allege that AT&T’s Sponsored Data program is a net neutrality violation.

In his paper, Lyons explains that the FCC might still regulate mobile networks but advises against a one-size-fits-all net neutrality approach. Instead, he encourages regulatory humility in order to promote investment in mobile networks and devices and to allow new business models. For support, he points out that several developing and developed countries have permitted commercial arrangements between content companies and carriers that arguably violate principles of net neutrality. Lyons makes the persuasive argument that these “non-neutral” service bundles and pricing decisions, on the whole, expand online access and ease non-connected populations into the Internet Age rather than harming consumers. As Lyons says,

The wide range of successful wireless innovations and partnerships at the international level should prompt U.S. regulators to rethink their commitment to a rigid set of rules that limit flexibility in American broadband markets. This should be especially true in the wireless broadband space, where complex technical considerations, rapid change, and robust competition make for anything but a stable and predictable business environment.

Further,

In the rapidly changing world of information technology, it is sometimes easy to forget that experimental new pricing models can be just as innovative as new technological developments. By offering new and different pricing models, companies can provide better value to consumers or identify niche segments that are not well-served by dominant pricing strategies.

Despite the January 2014 court decision striking down the FCC’s net neutrality rules, net neutrality is an issue that hasn’t died. Lyons’ research provides support for the position that a fixation on enforcing net neutrality, however defined, distracts policymakers from serious discussion of how to expand online access. Rules should be written with consumers and competition in mind. Wired ISPs get the lion’s share of scholars’ attention when discussing net neutrality. In an increasingly wireless world, Lyons’ paper provides important research to guide future US policies.

The Internet began as a U.S. military project. For two decades, the government restricted the network to government, academic, and other authorized non-commercial uses. In 1989, the U.S. gave up control and allowed private, commercial use of the Internet, a decision that let the network flourish and grow as few could have imagined at the time.

Late Friday, the NTIA announced its intent to give up the last vestiges of its control over the Internet, the last real evidence that it began as a government experiment. Control of the Domain Name System’s (DNS’s) Root Zone File has remained with the agency despite the creation of ICANN in 1998 to perform the other high-level domain name functions, called the IANA functions.

The NTIA announcement is not a huge surprise. The U.S. government has always said it eventually planned to devolve IANA oversight, albeit with lapsed deadlines and changes of course along the way.

The U.S. giving up control over the Root Zone File is a step toward a world in which governments no longer assert oversight over the technology of communication. Just as freedom of the printing press was important to the founding generation in America, an unfettered Internet is essential to our right to unimpeded communication. I am heartened to see that the U.S. will not consider any proposal that involves IANA oversight by an intergovernmental body.

Relatedly, next month’s global multistakeholder meeting in Brazil will consider principles and roadmaps for the future of Internet governance. I have made two contributions to the meeting, a set of proposed high-level principles that would limit the involvement of governments in Internet governance to facilitating participation by their nationals, and a proposal to support experimentation in peer-to-peer domain name systems. I view these proposals as related: the first keeps governments away from Internet governance and the second provides a check against ICANN simply becoming another government in control of the Internet.

Shane Greenstein, Kellogg Chair in Information Technology at Northwestern’s Kellogg School of Management, discusses his recent paper, Collective Intelligence and Neutral Point of View: The Case of Wikipedia, coauthored by Harvard assistant professor Feng Zhu. Greenstein and Zhu’s paper takes a look at whether Linus’ Law applies to Wikipedia articles. Do Wikipedia articles have a slant or bias? If so, how can we measure it? And do articles become less biased over time, as more contributors become involved? Greenstein explains his findings.


Sprint’s Chairman, Masayoshi Son, is coming to Washington to explain how wireless competition in the US would be improved if only there were less of it.

After buying Sprint last year for $21.6 billion, he has floated plans to buy T-Mobile. When antitrust officials voiced their concerns about the proposed plan’s potential impact on wireless competition, Son decided to respond with an unusual strategy that goes something like this: The US wireless market isn’t competitive enough, so policymakers need to approve the merger of the third and fourth largest wireless companies in order to improve competition, because going from four nationwide wireless companies to three will make things even more competitive. Got it? Me neither. Continue reading →

Yesterday, an administrative judge ruled in Huerta v. Pirker that the FAA’s “rules” banning commercial drones don’t have the force of law because the agency never followed the procedures required to enact them as an official regulation. The ruling means that any aircraft that qualifies as a “model aircraft” plausibly operates under laissez-faire. Entrepreneurs are free for now to develop real-life TacoCopters, and Amazon can launch its Prime Air same-day delivery service.

Laissez-faire might not last. The FAA could appeal the ruling, try to issue an emergency regulation, or simply wait 18 months or so until its current regulatory proceedings culminate in regulations for commercial drones. If the agency opts for the last of these, then the drone community has an interesting opportunity to show that regulations for small commercial drones do not pass a cost-benefit test. So start new drone businesses, but as Matt Waite says, “Don’t do anything stupid. Bad actors make bad policy.”

Kudos to Brendan Schulman, the attorney for Pirker, who has been a tireless advocate for the freedom to innovate using drone technology. He is on Twitter at @dronelaws, and if you’re at all interested in this issue, he is a great person to follow.

The House Subcommittee on Communications and Technology will soon consider whether to reauthorize the Satellite Television Extension and Localism Act (STELA), which is set to expire at the end of the year. A hearing scheduled for this week has been postponed on account of weather.

Congress ought to scrap the current compulsory license in STELA that governs the importation of distant broadcast signals by Direct Broadcast Satellite providers. STELA is redundant and outdated. The 25-year-old statute invites rent-seeking every time it comes up for reauthorization.

At the same time, Congress should also resist calls to use the STELA reauthorization process to consider retransmission consent reforms.  The retransmission consent framework is designed to function like the free market and is not the problem.

Continue reading →