U.S. Commodity Futures Trading Commission (CFTC) Commissioner J. Christopher Giancarlo delivered an amazing address this week before the Depository Trust & Clearing Corporation 2016 Blockchain Symposium. The title of his speech was “Regulators and the Blockchain: First, Do No Harm,” and it will go down as the definitive early statement about how policymakers can apply a principled, innovation-enhancing policy paradigm to distributed ledger technology (DLT) or “blockchain” applications.
“The potential applications of this technology are being widely imagined and explored in ways that will benefit market participants, consumers and governments alike,” Giancarlo noted in his address. But in order for that to happen, he said, we have to get policy right. “It is time again to remind regulators to ‘do no harm,’” he argued, and he continued on to note that
The United States’ global leadership in technological innovation of the Internet was built hand-in-hand with its enlightened “do no harm” regulatory framework. Yet, when the Internet developed in the mid-1990s, none of us could have imagined its capabilities that we take for granted today. Fortunately, policymakers had the foresight to create a regulatory environment that served as a catalyst rather than a choke point for innovation. Thanks to their forethought and restraint, Internet-based applications have revolutionized nearly every aspect of human life, created millions of jobs and increased productivity and consumer choice. Regulators must show that same forethought and restraint now [for the blockchain].
What Giancarlo is referring to is the approach that the U.S. government adopted toward the Internet and digital networks in the mid-1990s. You can think of this vision as “permissionless innovation.” As I explain in my recent book of the same title, permissionless innovation refers to the notion that we should generally be free to experiment and learn new and better ways of doing things through ongoing trial-and-error.
[This is an excerpt from Chapter 6 of the forthcoming 2nd edition of my book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom,” due out later this month. I was presenting on these issues at today’s New America Foundation “Cybersecurity for a New America” event, so I thought I would post this now. To learn more about the contrast between “permissionless innovation” and “precautionary principle” thinking, please consult the earlier edition of my book or see this blog post.]
Viruses, malware, spam, data breaches, and critical system intrusions are just some of the security-related concerns that often motivate precautionary thinking and policy proposals. But as with privacy- and safety-related worries, the panicky rhetoric surrounding these issues is usually unfocused and counterproductive.
In today’s cybersecurity debates, for example, it is not uncommon to hear frequent allusions to the potential for a “digital Pearl Harbor,” a “cyber cold war,” or even a “cyber 9/11.” These analogies are made even though these historical incidents resulted in death and destruction of a sort not comparable to attacks on digital networks. Others refer to “cyber bombs” or technological “time bombs,” even though no one can be “bombed” with binary code. Michael McConnell, a former director of national intelligence, went so far as to say that this “threat is so intrusive, it’s so serious, it could literally suck the life’s blood out of this country.”
Such outrageous statements reflect the frequent use of “threat inflation” rhetoric in debates about online security. Threat inflation has been defined as “the attempt by elites to create concern for a threat that goes beyond the scope and urgency that a disinterested analysis would justify.” Unfortunately, such bombastic rhetoric often conflates minor cybersecurity risks with major ones. For example, dramatic doomsday stories about hackers pushing planes out of the sky misdirect policymakers’ attention from the more immediate, but less gripping, risks of data extraction and foreign surveillance. Well-meaning skeptics might then conclude that our real cybersecurity risks are not a problem either. In the meantime, outdated legislation and inappropriate legal norms continue to impede beneficial defensive measures that could truly improve security.
The success of the Internet and the modern digital economy was due to its open, generative nature, driven by the ethos of “permissionless innovation.” A “light-touch” policy regime helped make this possible. Of particular legal importance was the immunization of online intermediaries from punishing forms of liability associated with the actions of third parties.
As “software eats the world” and the digital revolution extends its reach to the physical world, policymakers should extend similar legal protections to other “generative” tools and platforms, such as robotics, 3D printing, and virtual reality.
In other words, we need a Section 230 for the “maker” movement.
I was shocked and saddened to hear tonight that L.A. Superior Court Judge Dan Brenner was struck and killed in Los Angeles yesterday. I am just sick about it. He was a great man and good friend.
Dan was an outstanding legal mind who, before moving back out to California to become a judge in 2012, made a big impact here in DC while serving as a legal advisor to FCC chairman Mark Fowler in the 1980s. He went on to have a distinguished career as head of legal affairs at the National Cable & Telecommunications Association. He also served as an adjunct law professor in major law schools and wrote important essays and textbooks on media and broadband law.
More than all that, Dan Brenner was a dear friend to a great many people, and he was always the guy with the biggest smile on his face in any room he walked into. Dan had an absolutely infectious spirit; his amazing wit and wisdom inspired everyone around him. I never heard a single person say a bad word about Dan Brenner. Even people on the opposite side of any negotiating table from him respected and admired him. That’s pretty damn rare in a town like Washington, DC.
And Dan was a great friend to me.
Throughout the year, I collect some of the more notable tech policy-related essays that I’ve read and then publish an end-of-year list here. (Here, for example, are my end-of-year lists from 2014 and 2013.) So, here are some of my favorite essays and editorials from 2015. (Note: They are just in chronological order. No ranking here.)
- Larry Downes – “Take note Republicans and Democrats, this is what a pro-innovation platform looks like,” Washington Post, January 7. (Downes explains how governments need to adapt to accommodate and embrace new forms of technological innovation. He notes: “Here at home, the opportunity to wrap themselves in the flag of innovation is knocking for both parties, but so far there are few takers. Republicans and Democrats regularly invoke the rhetoric of innovation, entrepreneurship, and the transformative power of technology. But in reality neither party pursues policies that favor the disruptors. Instead, where lawmakers once took a largely hands-off approach to Silicon Valley, as the Internet revolution enters a new stage of industry transformation, the temptation to intervene, to usurp, to micromanage, to circumscribe the future — becomes irresistible.”) Equally excellent was Larry’s essay later in the year, “Fewer, Faster, Smarter.” (“As the technology revolution proceeds, the concept of government may return to its pre-industrial roots, setting the most basic rules of the economy and standing by as regulator of last resort when markets fail for some or all consumers over an extended period of time. Even then, the solution may simply be to tweak the incentives to encourage better behavior, rather than more full-fledged—and usually ill-fated—micromanagement of fast-changing industries.”)
- Bryant Walker Smith – “Slow Down That Runaway Ethical Trolley,” CIS Blog, January 12. (Smith, a leading expert on autonomous vehicle systems, notes that, while serious ethical dilemmas will always be present with such technologies, we should not allow the perfect to be the enemy of the good. “The fundamental ethical question, in my opinion, is this: In the United States alone, tens of thousands of people die in motor vehicle crashes every year, and many more are injured. Automated vehicles have great potential to one day reduce this toll, but the path to this point will involve mistakes and crashes and fatalities. Given this stark choice, what is the proper balance between caution and urgency in bringing these systems to the market? How safe is safe enough?”)
- Tim Worstall – “Google gets my data, I get search and email and that. Help help, I’m being OPPRESSED!” The Register, February 4. (A wicked tongue-lashing of the critics of the data-driven economy.)
- Aki Ito – “Six Things Technology Has Made Insanely Cheap: Behold the power of American progress,” Bloomberg Business, February 5. (The title says it all.)
- Andrew McAfee – “Who are the humanists, and why do they dislike technology so much?” Financial Times, July 7, 2015. (A brief but brilliant exploration of the philosophical fight over differing conceptions of “humanism.” McAfee, appropriately in my opinion, calls into question technological critics who label themselves “humanists” and then suggest that those who believe in the benefits of technological innovation and progress are somehow opposed to humanity. In reality, of course, nothing could be further from the truth!)
- Jocelyn Brewer – “Techno-Fear is Hurting Kids, Not Their Use of Digital Devices,” July 7, 2015. (A beautiful piece that makes it clear why “the Internet… is not addictive. Technology is not a drug.” Brewer continues on to make the case for avoiding fear-based messaging about Internet problems and instead adopting a more sensible approach: “Rather than trotting out interminable lists of the negative consequences of our adoption of technology lets raise awareness of how to avoid the pitfalls of not approaching this new era with solutions and proactive thinking.” Amen, sister!)
- Evan Ackerman – “We Should Not Ban ‘Killer Robots,’ and Here’s Why,” IEEE Spectrum, July 29, 2015. (A thought-provoking piece about a controversial subject in which Ackerman argues that “banning the technology is not going to solve the problem if the problem is the willingness of humans to use technology for evil.”)
- Tim O’Reilly – “Networks and the Nature of the Firm,” Medium, August 14, 2015. (Explores the economics of the sharing economy and “the huge economic shift led by software and connectedness.”)
- Joe Queenan – “America’s Need for Pointless Updates and Cat Videos,” Wall Street Journal, December 3, 2015. (“The back-to-nature, turn-off-your-cellphone movement is based on a false assumption. . . . Time not spent doing dumb stuff would otherwise be wasted doing other dumb stuff. It’s called ‘play,’ without which Jack is a dull boy. It is a variation on the old saying that nature abhors a vacuum. So nature created the Internet.”)
- Dominic Basulto – “Can we just stop with all these tech dystopia stories?” Washington Post, December 8, 2015. (“Yes, a dystopian future is possible, but so is a utopian future. Most likely, the answer is somewhere in the middle, the way it’s been for millennia.”)
Today, the U.S. Department of Transportation and the Federal Aviation Administration (FAA) announced that they will soon require Unmanned Aircraft Systems (UAS), or private drones, used for both personal and commercial purposes, to be registered in a national database. To facilitate this process, the agencies announced the creation of a new federal task force that will develop recommendations for a UAS registration process. Rules are to be published by November 20th (presumably to cover new devices sold before Christmas).
Here are some quick initial reactions on the proposed registration rules:
One of my favorite themes, and not just in the field of tech policy, is the “Unintended Consequences of Well-Intentioned Regulations.” I believe that all laws and regulations have dynamic effects, and that to fully appreciate the true impact of any particular public policy, you must always closely investigate the potential opportunity costs and unintended consequences associated with it. All too often, laws and regulations are hastily put on the books with the very best of intentions in mind, only to later be shown to produce the opposite of what was intended.
Today’s case in point comes from a Wall Street Journal article by Rachel Bachman, and it involves how the growing wave of cycling helmet laws is having a net negative impact on public health because such laws discourage ridership in the aggregate. Those potential riders are then either (a) just less active overall or (b) driving their cars to get where they need to go. And both of those results are, ultimately, riskier than cycling without a helmet. For that reason, Bachman reports, cycling advocates “are pushing back against mandatory bike-helmet laws in the U.S. and elsewhere. They say mandatory helmet laws, particularly for adults, make cycling less convenient and seem less safe, thus hindering the larger public-health gains of more people riding bikes.” Supporting evidence comes from this 2012 paper in the journal Risk Analysis by Piet de Jong, a professor in the department of applied finance and actuarial studies at Sydney’s Macquarie University. His paper included an empirical model showing that mandatory bike-helmet laws “have a net negative health impact.”
This strikes me as one of the very best examples of how to do dynamic benefit-cost analysis and show the full range of societal impacts associated with well-intentioned regulations. And it reminds me of the playground example I use in several of my papers: Laws and liability threats discouraged tall playground climbing structures in the ’80s and ’90s.
The big news out of Europe today is that the European Court of Justice (ECJ) has invalidated the 15-year-old EU-US safe harbor agreement, which facilitated data transfers between the EU and US. American tech companies have relied on the safe harbor to do business in the European Union, which has more onerous data handling regulations than the US. [PDF summary of decision here.] Below I offer some quick thoughts about the decision and some of its potential unintended consequences.
#1) Another blow to new entry / competition in the EU: While some pundits are claiming this is a huge blow to big US tech firms, in reality, the irony of the ruling is that it will bolster the market power of the biggest US tech firms, because they are the only ones that will be able to afford the formidable compliance costs associated with the resulting regulatory regime. In fact, with each EU privacy decision, Google, Facebook, and other big US tech firms just get more dominant. Small firms just can’t comply with the EU’s expanding regulatory thicket. “It will involve lots of contracts between lots of parties and it’s going to be a bit of a nightmare administratively,” said Nicola Fulford, head of data protection at the UK law firm Kemp Little, when commenting on the ruling to the BBC. “It’s not that we’re going to be negotiating them individually, as the legal terms are mostly fixed, but it does mean a lot more paperwork and they have legal implications.” And by driving up regulatory compliance costs and causing constant delays in how online business is conducted, the ruling will (again, on top of all the others) greatly limit entry and innovation by new, smaller players in the digital world. In essence, EU data regulations have already wiped out much of the digital competition in Europe, and now this ruling finishes off any global new entrants who might have hoped to break in and offer competitive alternatives. These are the sorts of stories never told in antitrust circles: costly government rulings often solidify and extend the market dominance of existing companies. Dynamic effects matter. That is certainly going to be the case here.
I recently finished Learning by Doing: The Real Connection between Innovation, Wages, and Wealth, by James Bessen of the Boston University School of Law. It’s a good book to check out if you are worried about whether workers will be able to weather this latest wave of technological innovation. One of the key insights of Bessen’s book is that, as with previous periods of turbulent technological change, today’s workers and businesses will need to find ways to adapt to the rapidly changing marketplace realities brought on by the Information Revolution, robotics, and automated systems.
That sort of adaptation takes time. For technological revolutions to take hold and have a meaningful impact on economic growth and worker conditions, large numbers of ordinary workers must acquire new knowledge and skills, Bessen notes. But, “that is a slow and difficult process, and history suggests that it often requires social changes supported by accommodating institutions and culture.” (p. 223) That is not a reason to resist disruptive forms of technological change, however. To the contrary, Bessen says, it is crucial to allow ongoing trial-and-error experimentation and innovation to continue precisely because it represents a learning process which helps people (and workers in particular) adapt to changing circumstances and acquire new skills to deal with them. That, in a nutshell, is “learning by doing.” As he elaborates elsewhere in the book:
Major new technologies become ‘revolutionary’ only after a long process of learning by doing and incremental improvement. Having the breakthrough idea is not enough. But learning through experience and experimentation is expensive and slow. Experimentation involves a search for productive techniques: testing and eliminating bad techniques in order to find good ones. This means that workers and equipment typically operate for extended periods at low levels of productivity using poor techniques and are able to eliminate those poor practices only when they find something better. (p. 50)
Luckily, however, history also suggests that, time and time again, that process has happened, and the standard of living for workers and average citizens alike has improved along the way.