Later today I’ll be testifying at a hearing before the House Small Business Committee titled “Bitcoin: Examining the Benefits and Risks for Small Business.” It will be live streamed starting at 1 p.m. My testimony will be available on the Mercatus website at that time, but below is some of my work on Bitcoin in case you’re new to the issue.

Also, tonight I’ll be speaking at a great event hosted by the DC FinTech meetup on “Bitcoin & the Internet of Money.” I’ll be joined by Bitcoin core developer Jeff Garzik and we’ll be interviewed on stage by Joe Weisenthal of Business Insider. It’s open to the public, but you have to RSVP.

Finally, stay tuned because in the next couple of days my colleagues Houman Shadab, Andrea Castillo, and I will be posting a draft of our new law review article looking at Bitcoin derivatives, prediction markets, and gambling. Bitcoin is the most fascinating issue I’ve ever worked on.

Here’s Some Bitcoin Reading…

And here’s my interview with Reihan Salam discussing Bitcoin…

Last December, it was my pleasure to take part in a great event, “The Disruptive Competition Policy Forum,” sponsored by Project DisCo (or The Disruptive Competition Project). It featured several excellent panels and keynotes, and they’ve just posted the video of the panel I was on here; I have embedded it below. In my remarks, I discussed:

  • benefit-cost analysis in digital privacy debates (building on this law review article);
  • the contrast between Europe and America’s approach to data & privacy issues (referencing this testimony of mine);
  • the problem of “technopanics” in information policy debates (building on this law review article);
  • the difficulty of information control efforts in various tech policy debates (which I wrote about in this law review article and these two blog posts: 1, 2);
  • the possibility of less-restrictive approaches to privacy & security concerns (which I have written about here as well in those other law review articles);
  • the rise of the Internet of Things and the unique challenges it creates (see this and this as well as my new book); and,
  • the possibility of a splintering of the Internet or the rise of “federated Internets.”

The panel was expertly moderated by Ross Schulman, Public Policy & Regulatory Counsel for CCIA, and also included remarks from John Boswell, SVP & Chief Legal Officer at SAS, and Josh Galper, Chief Policy Officer and General Counsel of Personal, Inc. (By the way, you should check out some of the cool things Personal is doing in this space to help consumers. Very innovative stuff.) The video lasts one hour. Here it is:

After yesterday’s FCC meeting, it appears that Chairman Wheeler has a finely tuned microscope trained on broadcasters and a proportionately large blind spot for the cable television industry.

Yesterday’s FCC meeting was unabashedly pro-cable and anti-broadcaster. The agency decided to prohibit television broadcasters from engaging in the same industry behavior as cable, satellite, and telco television distributors and programmers. The resulting disparity in regulatory treatment highlights the inherent dangers in addressing regulatory reform piecemeal rather than comprehensively as contemplated by the #CommActUpdate. Congress should lead the FCC by example and adopt a “clean” approach to STELA reauthorization that avoids the agency’s regulatory mistakes.

The FCC meeting offered a study in the way policymakers pick winners and losers in the marketplace without acknowledging unfair regulatory treatment. It’s a three-step process.

  • First, the policymaker obfuscates similarities among issues by referring to substantively similar economic activity across multiple industry segments using different terminology.
  • Second, it artificially narrows the issues by limiting any regulatory inquiry to the disfavored industry segment only.
  • Third, it adopts disparate regulations applicable to the disfavored industry segment only while claiming the unfair regulatory treatment benefits consumers.

The broadcast items adopted by the FCC yesterday hit all three points.

Give us our drone-delivered beer!

That’s how the conversation got started between John Stossel and me on his show this week. I appeared on Stossel’s Fox Business TV show to discuss the many beneficial uses of private drones. The problem is that drones — which are more appropriately called unmanned aircraft systems — have an image problem. When we think about drones today, they often conjure up images of nefarious military machines dealing death and destruction from above in a far-off land. And certainly plenty of that happens today (far, far too much in my personal opinion, but that’s a rant best left for another day!).

But any technology can be put to both good and bad uses, and drones are merely the latest in a long list of “dual-use technologies,” which have both military uses and peaceful private uses. Other examples of dual-use technologies include: automobiles, airplanes, ships, rockets and propulsion systems, chemicals, computers and electronic systems, lasers, sensors, and so on. Put simply, almost any technology that can be used to wage war can also be used to wage peace and commerce. And that’s equally true for drones, which come in many sizes and have many peaceful, non-military uses. Thus, it would be wrong to judge them based upon their early military history or how they are currently perceived. (After all, let’s not forget that the Internet’s early origins were militaristic in character, too!)

Some of the other beneficial uses and applications of unmanned aircraft systems include: agricultural (crop inspection & management, surveying); environmental (geological, forest management, tornado & hurricane research); industrial (site & service inspection, surveying); infrastructure management (traffic and accident monitoring); public safety (search & rescue, post-natural disaster services, other law enforcement); and delivery services (goods & parcels, food & beverages, flowers, medicines, etc.), just to name a few.



Some recent tech news provides insight into the trajectory of the broadband and television markets. These stories also indicate a poor prognosis for net neutrality. Setting aside the substantial political and ISP opposition to new rules, even net neutrality proponents point out that “neutrality” is difficult to define and even harder to implement. Now that the line between “Internet video” and “television” delivered via Internet Protocol (IP) is increasingly blurred, net neutrality goals are suffering from mission creep.

First, there was the announcement that Netflix, like many large content companies, was entering into a paid peering agreement with Comcast, prompting a complaint from Netflix CEO Reed Hastings, who argued that ISPs have too much leverage in negotiating these interconnection deals.

Second, Comcast and Apple discussed a possible partnership whereby Comcast customers would receive prioritized access to Apple’s new video service. Apple’s TV offering would be a “managed service” exempt from net neutrality obligations.

Interconnection and managed services are generally not considered net neutrality issues. They are not “loopholes.” They were expressly exempted from the FCC’s 2010 (now-defunct) rules. However, net neutrality proponents are attempting to bring interconnection and managed services to the FCC’s attention as the FCC crafts new net neutrality rules. Net neutrality proponents have an uphill battle already, and the following trends won’t help.

Most conservatives and many prominent thinkers on the left agree that the Communications Act should be updated based on the insight provided by the wireless and Internet protocol revolutions. The fundamental problem with the current legislation is its disparate treatment of competitive communications services. A comprehensive legislative update offers an opportunity to adopt a technologically neutral, consumer-focused approach to communications regulation that would maximize competition, investment, and innovation.

Though the Federal Communications Commission (FCC) must continue implementing the existing Act while Congress deliberates legislative changes, the agency should avoid creating new regulatory disparities on its own. Yet that is where the agency appears to be heading at its meeting next Monday.

I am pleased to announce the release of my latest book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.” It’s a short manifesto (just under 100 pages) that condenses — and attempts to make more accessible — arguments that I have developed in various law review articles, working papers, and blog posts over the past few years. I have two goals with this book.

First, I attempt to show how the central fault line in almost all modern technology policy debates revolves around “the permission question,” which asks: Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? How that question is answered depends on the disposition one adopts toward new inventions. Two conflicting attitudes are evident.

One disposition is known as the “precautionary principle.” Generally speaking, it refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.

The other vision can be labeled “permissionless innovation.” It refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.

I argue that we are witnessing a grand clash of visions between these two mindsets in almost all major technology policy discussions today.

The Mercatus Center at George Mason University has released a new working paper by Daniel A. Lyons, professor at Boston College Law School, entitled “Innovations in Mobile Broadband Pricing.”

In 2010, the FCC passed net neutrality rules for mobile carriers and ISPs that included a “no blocking” provision (since struck down in Verizon v. FCC). The FCC prohibited mobile carriers from blocking Internet content and promised to scrutinize carriers’ non-standard pricing decisions. These broad regulations had a predictable chilling effect on firms trying new business models. For instance, Lyons describes how MetroPCS was hit with a net neutrality complaint because it allowed YouTube but not other video streaming sites on its budget LTE plan (something I’ve written on). Some critics also allege that AT&T’s Sponsored Data program is a net neutrality violation.

In his paper, Lyons explains that the FCC might still regulate mobile networks but advises against a one-size-fits-all net neutrality approach. Instead, he encourages regulatory humility in order to promote investment in mobile networks and devices and to allow new business models. For support, he points out that several developing and developed countries have permitted commercial arrangements between content companies and carriers that arguably violate principles of net neutrality. Lyons makes the persuasive argument that, on the whole, these “non-neutral” service bundles and pricing decisions do not harm consumers but instead expand online access and ease non-connected populations into the Internet Age. As Lyons says,

The wide range of successful wireless innovations and partnerships at the international level should prompt U.S. regulators to rethink their commitment to a rigid set of rules that limit flexibility in American broadband markets. This should be especially true in the wireless broadband space, where complex technical considerations, rapid change, and robust competition make for anything but a stable and predictable business environment.

Further,

In the rapidly changing world of information technology, it is sometimes easy to forget that experimental new pricing models can be just as innovative as new technological developments. By offering new and different pricing models, companies can provide better value to consumers or identify niche segments that are not well-served by dominant pricing strategies.

Despite the January 2014 court decision striking down the FCC’s net neutrality rules, it’s an issue that hasn’t died. Lyons’ research provides support for the position that a fixation on enforcing net neutrality, however defined, distracts policymakers from serious discussion of how to expand online access. Rules should be written with consumers and competition in mind. Wired ISPs get the lion’s share of scholars’ attention when discussing net neutrality. In an increasingly wireless world, Lyons’ paper provides important research to guide future US policies.

The Internet began as a U.S. military project. For two decades, the government restricted access to the network to government, academic, and other authorized non-commercial use. In 1989, the U.S. gave up control: it opened the network to private, commercial use, a decision that allowed the Internet to flourish and grow as few could have imagined at the time.

Late Friday, the NTIA announced its intent to give up the last vestiges of its control over the Internet, the last real evidence that it began as a government experiment. Control of the Domain Name System’s (DNS’s) Root Zone File has remained with the agency despite the creation of ICANN in 1998 to perform the other high-level domain name functions, called the IANA functions.

The NTIA announcement is not a huge surprise. The U.S. government has always said it eventually planned to devolve IANA oversight, albeit with lapsed deadlines and changes of course along the way.

The U.S. giving up control over the Root Zone File is a step toward a world in which governments no longer assert oversight over the technology of communication. Just as freedom of the printing press was important to the founding generation in America, an unfettered Internet is essential to our right to unimpeded communication. I am heartened to see that the U.S. will not consider any proposal that involves IANA oversight by an intergovernmental body.

Relatedly, next month’s global multistakeholder meeting in Brazil will consider principles and roadmaps for the future of Internet governance. I have made two contributions to the meeting: a set of proposed high-level principles that would limit the involvement of governments in Internet governance to facilitating participation by their nationals, and a proposal to support experimentation in peer-to-peer domain name systems. I view these proposals as related: the first keeps governments away from Internet governance, and the second provides a check against ICANN simply becoming another government in control of the Internet.

Shane Greenstein, Kellogg Chair in Information Technology at Northwestern’s Kellogg School of Management, discusses his recent paper, Collective Intelligence and Neutral Point of View: The Case of Wikipedia, coauthored by Harvard assistant professor Feng Zhu. Greenstein and Zhu’s paper takes a look at whether Linus’ Law applies to Wikipedia articles. Do Wikipedia articles have a slant or bias? If so, how can we measure it? And, do articles become less biased over time, as more contributors become involved? Greenstein explains his findings.
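To make the measurement question concrete, here is a minimal sketch of a phrase-based slant score, loosely in the spirit of the index-style approaches used in studies of media and Wikipedia bias. The phrase lists, weighting, and function name below are hypothetical illustrations, not the actual methodology of the Greenstein-Zhu paper.

```python
# Hypothetical illustration: score an article's "slant" by counting
# occurrences of politically coded phrases. Real studies use much larger,
# empirically derived phrase lists; these tiny sets are placeholders.

LEFT_PHRASES = {"estate tax", "workers rights"}
RIGHT_PHRASES = {"death tax", "tax relief"}

def slant_score(text: str) -> float:
    """Return a score in [-1, 1]: negative means more left-coded phrases,
    positive means more right-coded phrases, 0 means neither or a tie."""
    t = text.lower()
    left = sum(t.count(p) for p in LEFT_PHRASES)
    right = sum(t.count(p) for p in RIGHT_PHRASES)
    total = left + right
    return 0.0 if total == 0 else (right - left) / total
```

Tracking a score like this across an article's revision history is one way to ask Greenstein and Zhu's question: does the slant shrink toward zero as more contributors edit the article?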
