Innovation & Entrepreneurship

As “software eats the world,” the reach of the Digital Revolution continues to expand into far-flung fields and sectors. The ramifications of this are tremendously exciting, but at times they can also be a bit frightening.

Consider this recent Washington Post headline: “A College Kid Spends $60 to Straighten His Own Teeth. What Could Possibly Go Wrong?” Matt McFarland of the Post reports that, “A college student has received a wealth of interest in his dental work after publishing an account of straightening his own teeth for $60.” The student at the New Jersey Institute of Technology, “had no dentistry experience when he decided to create plastic aligners to improve his smile,” but was able to use a 3D printer and laser scanner on campus to accomplish the job. “After publishing before-and-after pictures of his teeth this month, [the student] has received hundreds of requests from strangers, asking him to straighten their teeth.”

McFarland cites many medical professionals who are horrified at the prospect of patients taking their health decisions into their own hands and engaging in practices that could be dangerous to themselves and others. Some of the licensed practitioners cited in the story come across as bitter losers as they face the potential for widespread disintermediation of their profession. After all, they currently charge thousands of dollars for various dental procedures and equipment. Thanks to technological innovations, however, those costs could soon plummet, significantly undercutting their healthy margins on dental services and equipment. On the other hand, these professionals have a fair point about untrained citizens doing their own dental work or giving others the ability to do so. Things certainly could go horribly wrong.

This is another interesting case study related to the subject of a forthcoming Mercatus paper of mine, as well as an upcoming law review article on 3D printing, both of which pose the following question: What happens when radically decentralized technological innovation (such as 3D printing) gives people a de facto “right to try” new medicines and medical devices? Continue reading →

U.S. Commodity Futures Trading Commission (CFTC) Commissioner J. Christopher Giancarlo delivered an amazing address this week before the Depository Trust & Clearing Corporation 2016 Blockchain Symposium. The title of his speech was “Regulators and the Blockchain: First, Do No Harm,” and it will go down as the definitive early statement about how policymakers can apply a principled, innovation-enhancing policy paradigm to distributed ledger technology (DLT) or “blockchain” applications.

“The potential applications of this technology are being widely imagined and explored in ways that will benefit market participants, consumers and governments alike,” Giancarlo noted in his address. But in order for that to happen, he said, we have to get policy right. “It is time again to remind regulators to ‘do no harm,’” he argued, and he continued on to note that

The United States’ global leadership in technological innovation of the Internet was built hand-in-hand with its enlightened “do no harm” regulatory framework. Yet, when the Internet developed in the mid-1990s, none of us could have imagined its capabilities that we take for granted today. Fortunately, policymakers had the foresight to create a regulatory environment that served as a catalyst rather than a choke point for innovation. Thanks to their forethought and restraint, Internet-based applications have revolutionized nearly every aspect of human life, created millions of jobs and increased productivity and consumer choice. Regulators must show that same forethought and restraint now [for the blockchain].

What Giancarlo is referring to is the approach that the U.S. government adopted toward the Internet and digital networks in the mid-1990s. You can think of this vision as “permissionless innovation.” As I explain in my recent book of the same title, permissionless innovation refers to the notion that we should generally be free to experiment and learn new and better ways of doing things through ongoing trial-and-error. Continue reading →

[This is an excerpt from Chapter 6 of the forthcoming 2nd edition of my book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom,” due out later this month. I was presenting on these issues at today’s New America Foundation “Cybersecurity for a New America” event, so I thought I would post this now. To learn more about the contrast between “permissionless innovation” and “precautionary principle” thinking, please consult the earlier edition of my book or see this blog post.]

Viruses, malware, spam, data breaches, and critical system intrusions are just some of the security-related concerns that often motivate precautionary thinking and policy proposals.[1] But as with privacy- and safety-related worries, the panicky rhetoric surrounding these issues is usually unfocused and counterproductive.

In today’s cybersecurity debates, for example, it is not uncommon to hear frequent allusions to the potential for a “digital Pearl Harbor,”[2] a “cyber cold war,”[3] or even a “cyber 9/11.”[4] These analogies are made even though these historical incidents resulted in death and destruction of a sort not comparable to attacks on digital networks. Others refer to “cyber bombs” or technological “time bombs,” even though no one can be “bombed” with binary code.[5] Michael McConnell, a former director of national intelligence, went so far as to say that this “threat is so intrusive, it’s so serious, it could literally suck the life’s blood out of this country.”[6]

Such outrageous statements reflect the frequent use of “threat inflation” rhetoric in debates about online security.[7] Threat inflation has been defined as “the attempt by elites to create concern for a threat that goes beyond the scope and urgency that a disinterested analysis would justify.”[8] Unfortunately, such bombastic rhetoric often conflates minor cybersecurity risks with major ones. For example, dramatic doomsday stories about hackers pushing planes out of the sky misdirect policymakers’ attention from the more immediate, but less gripping, risks of data extraction and foreign surveillance. Well-meaning skeptics, seeing those doomsday scenarios fail to materialize, might then conclude that our real cybersecurity risks are also not a problem. In the meantime, outdated legislation and inappropriate legal norms continue to impede beneficial defensive measures that could truly improve security. Continue reading →

The success of the Internet and the modern digital economy was due to its open, generative nature, driven by the ethos of “permissionless innovation.” A “light-touch” policy regime helped make this possible. Of particular legal importance was the immunization of online intermediaries from punishing forms of liability associated with the actions of third parties.

As “software eats the world” and the digital revolution extends its reach to the physical world, policymakers should extend similar legal protections to other “generative” tools and platforms, such as robotics, 3D printing, and virtual reality.

In other words, we need a Section 230 for the “maker” movement. Continue reading →

I wanted to draw your attention to this important address on online platform regulation by Alex Chisholm, the head of the UK’s Competition and Markets Authority, the non-ministerial department responsible for competition policy issues in the UK. Chisholm delivered the address on October 27th at the Bundesnetzagentur conference in Bonn. It’s a terrific speech that other policymakers would be wise to read and mimic to ensure that antitrust and competition policy decisions don’t derail the many benefits of the Information Revolution.

“Today, as regulators, we have the responsibility but also the great historical privilege of playing an influential role in the deployment throughout the economy of the latest of these defining technological eras,” Chisholm began. “As regulators, we must try to minimise the inevitable mismatch between how we’ve done things before and the opportunities and risks of the new,” he argued.

He continued on to specify three recommendations for those crafting policy on this front: Continue reading →

Those of us with deep reservations about the push for ever more unlicensed spectrum are having many of our fears realized with the new resistance to novel technologies using unlicensed spectrum. By law, unlicensed spectrum users have no rights to their spectrum; unlicensed spectrum is a managed commons. In practice, however, existing users frequently act as if they own their spectrum and can exclude others. By entertaining these complaints, the FCC simply encourages NIMBYism in unlicensed spectrum.

The general idea behind unlicensed spectrum is that by providing a free spectrum commons to any device maker who complies with certain simple rules (namely, Part 15’s low power operation requirement), device makers will develop wireless services that would never have developed if the device makers had to shell out millions for licensed spectrum. For decades, unlicensed spectrum has stimulated development and sale of millions of consumer devices, including cordless phones, Bluetooth devices, wifi access points, RC cars, and microwave ovens.
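To make that bargain concrete, here is a minimal sketch, in Python, of the kind of power check an unlicensed 2.4 GHz device faces. The 30 dBm conducted-power and 36 dBm EIRP caps are the commonly cited Part 15 (47 CFR 15.247) figures for digitally modulated systems in that band, but treat the numbers and the function below as illustrative assumptions rather than a statement of the actual rules.

```python
# Illustrative sketch of a Part 15-style power check for a 2.4 GHz device.
# The caps below are the commonly cited 47 CFR 15.247 figures for digitally
# modulated systems; an assumption for illustration, not legal guidance.

MAX_CONDUCTED_DBM = 30.0  # 1 watt maximum transmitter output power
MAX_EIRP_DBM = 36.0       # 30 dBm conducted plus 6 dBi of allowed antenna gain

def within_limits(tx_power_dbm: float, antenna_gain_dbi: float) -> bool:
    """Return True if both conducted power and EIRP fall under the caps."""
    eirp_dbm = tx_power_dbm + antenna_gain_dbi
    return tx_power_dbm <= MAX_CONDUCTED_DBM and eirp_dbm <= MAX_EIRP_DBM

print(within_limits(20.0, 3.0))  # True: a typical wifi access point, 23 dBm EIRP
print(within_limits(30.0, 9.0))  # False: 39 dBm EIRP exceeds the 36 dBm cap
```

The low-power requirement is the whole of the bargain: any device maker may use the band without paying for a license, so long as its emissions stay under caps like these and it accepts whatever interference comes its way.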

Now, however, many device makers are getting nervous about new entrants. For instance, Globalstar is developing a technology, TLPS, based on wifi standards, that will use some unlicensed spectrum at 2.4 GHz, and mobile carriers would like to market an unlicensed spectrum technology, LTE-U, based on 4G LTE standards, that will use spectrum at 5 GHz.

This resistance from various groups and spectrum incumbents, who fear interference in “their” spectrum if these new technologies catch on, was foreseeable, which makes these intractable conflicts even more regrettable. As Prof. Tom Hazlett wrote in a 2001 essay, long before today’s conflicts, when it comes to unlicensed devices, “economic success spells its own demise.” Hazlett noted, “Where an unlicensed firm successfully innovates, open access guarantees imitation. This not only results in competition…but may degrade wireless emissions — perhaps severely.”

On the other hand, the many technical filings about potential interference to existing unlicensed devices are red herrings. Prospective device makers in these unlicensed bands have no duty to protect existing users. Part 15 rules say that unlicensed users like wifi and Bluetooth “shall not be deemed to have any vested or recognizable right to continued use of any given frequency by virtue of prior registration or certification of equipment” and that “interference must be accepted.” These rules, however, put the FCC in a self-created double bind: the agency provides no interference protection to existing users but its open access policy makes interference conflicts likely. Continue reading →

The big news out of Europe today is that the European Court of Justice (ECJ) has invalidated the 15-year-old EU-US safe harbor agreement, which facilitated data transfers between the EU and US. American tech companies have relied on the safe harbor to do business in the European Union, which has more onerous data handling regulations than the US. [PDF summary of decision here.] Below I offer some quick thoughts about the decision and some of its potential unintended consequences.

#1) Another blow to new entry / competition in the EU: While some pundits are claiming this is a huge blow to big US tech firms, the irony of the ruling is that it will actually bolster the market power of the biggest US tech firms, because they are the only ones that will be able to afford the formidable compliance costs associated with the resulting regulatory regime. In fact, with each EU privacy decision, Google, Facebook, and other big US tech firms just get more dominant. Small firms simply can’t comply with the EU’s expanding regulatory thicket. “It will involve lots of contracts between lots of parties and it’s going to be a bit of a nightmare administratively,” said Nicola Fulford, head of data protection at the UK law firm Kemp Little, when commenting on the ruling to the BBC. “It’s not that we’re going to be negotiating them individually, as the legal terms are mostly fixed, but it does mean a lot more paperwork and they have legal implications.” By driving up regulatory compliance costs and causing constant delays in how online business is conducted, the ruling will (again, on top of all the others) greatly limit entry and innovation by new, smaller players in the digital world. In essence, EU data regulations have already wiped out much of the digital competition in Europe, and now this ruling finishes off any global new entrants who might have hoped to break in and offer competitive alternatives. These are the sorts of stories never told in antitrust circles: costly government rulings often solidify and extend the market dominance of existing companies. Dynamic effects matter. That is certainly going to be the case here. Continue reading →

I recently finished Learning by Doing: The Real Connection between Innovation, Wages, and Wealth, by James Bessen of the Boston University Law School. It’s a good book to check out if you are worried about whether workers will be able to weather this latest wave of technological innovation. One of the key insights of Bessen’s book is that, as with previous periods of turbulent technological change, today’s workers and businesses will obviously need to find ways to adapt to rapidly changing marketplace realities brought on by the Information Revolution, robotics, and automated systems.

That sort of adaptation takes time. For technological revolutions to take hold and have a meaningful impact on economic growth and worker conditions, large numbers of ordinary workers must acquire new knowledge and skills, Bessen notes. But “that is a slow and difficult process, and history suggests that it often requires social changes supported by accommodating institutions and culture.” (p. 223) That is not a reason to resist disruptive forms of technological change, however. To the contrary, Bessen says, it is crucial to allow ongoing trial-and-error experimentation and innovation to continue precisely because it represents a learning process which helps people (and workers in particular) adapt to changing circumstances and acquire new skills to deal with them. That, in a nutshell, is “learning by doing.” As he elaborates elsewhere in the book:

Major new technologies become ‘revolutionary’ only after a long process of learning by doing and incremental improvement. Having the breakthrough idea is not enough. But learning through experience and experimentation is expensive and slow. Experimentation involves a search for productive techniques: testing and eliminating bad techniques in order to find good ones. This means that workers and equipment typically operate for extended periods at low levels of productivity using poor techniques and are able to eliminate those poor practices only when they find something better. (p. 50)

Luckily, however, history also suggests that, time and time again, that process has happened and the standard of living for workers and average citizens alike improved at the same time. Continue reading →

I wanted to draw your attention to yet another spectacular speech by Maureen K. Ohlhausen, a Commissioner with the Federal Trade Commission (FTC). I have written here before about Commissioner Ohlhausen’s outstanding speeches, but this latest one might be her best yet.

On Tuesday, Ohlhausen was speaking at a U.S. Chamber of Commerce Foundation day-long event on “The Internet of Everything: Data, Networks and Opportunities.” The conference featured various keynote speakers and panels discussing “the many ways that data and Internet connectivity is changing the face of business and society.” (It was my honor to also be invited to deliver an address to the crowd that day.)

As with many of her other recent addresses, Commissioner Ohlhausen stressed why it is so important that policymakers “approach new technologies and new business models with regulatory humility.” Building on the work of the great Austrian economist F.A. Hayek, who won a Nobel prize in part for his work explaining the limits of our knowledge to plan societies and economies, Ohlhausen argues that: Continue reading →

Tech Policy Threat Matrix

On the whiteboard that hangs in my office, I have a giant matrix of technology policy issues and the various policy “threat vectors” that might end up driving regulation of particular technologies or sectors. My colleagues at the Mercatus Center’s Technology Policy Program and I constantly revise this list of policy priorities and simultaneously make an (obviously quite subjective) attempt to put some weights on the potential policy severity associated with each threat of intervention. The matrix looks like this:

[Image: Tech Policy Issue Matrix 2015]

I use five general policy concerns when considering the likelihood of regulatory intervention in any given area; a rough sketch of how the weighting might be represented follows the list. Those policy concerns are:

  1. privacy (reputation issues, fear of “profiling” & “discrimination,” amorphous psychological / cognitive harms);
  2. safety (health & physical safety or, alternatively, child safety and speech / cultural concerns);
  3. security (hacking, cybersecurity, law enforcement issues);
  4. economic disruption (automation, job dislocation, sectoral disruptions); and
  5. intellectual property (copyright and patent issues).
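For concreteness, here is a minimal sketch of how a matrix like this might be represented in code. The issue areas and the 0–3 severity weights below are purely hypothetical placeholders (the real whiteboard version is much larger and constantly revised); the five columns are the policy concerns listed above.

```python
# A minimal, hypothetical sketch of the threat matrix described above.
# Rows are illustrative tech policy issue areas; columns are the five
# policy concerns; cells hold made-up 0-3 severity weights.

CONCERNS = ["privacy", "safety", "security",
            "economic disruption", "intellectual property"]

THREAT_MATRIX = {
    "driverless cars":   [2, 3, 2, 3, 1],
    "commercial drones": [3, 3, 1, 1, 0],
    "3D printing":       [1, 2, 1, 2, 3],
}

def overall_threat(issue: str) -> int:
    """Sum an issue's per-concern weights into a rough overall score."""
    return sum(THREAT_MATRIX[issue])

for issue, weights in THREAT_MATRIX.items():
    detail = ", ".join(f"{c}: {w}" for c, w in zip(CONCERNS, weights))
    print(f"{issue} (total {overall_threat(issue)}): {detail}")
```

The point of the exercise is not the totals themselves, which are admittedly subjective, but making explicit which threat vectors are driving the overall regulatory risk for each technology.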

Continue reading →