Adam is a senior research fellow at the Mercatus Center at George Mason University. He previously served as President of the Progress & Freedom Foundation, Director of Telecommunications Studies at the Cato Institute, and Fellow in Economic Policy at the Heritage Foundation.
I recently finished Learning by Doing: The Real Connection between Innovation, Wages, and Wealth, by James Bessen of the Boston University School of Law. It's a good book to check out if you are worried about whether workers will be able to weather this latest wave of technological innovation. One of the key insights of Bessen's book is that, as with previous periods of turbulent technological change, today's workers and businesses will need to find ways to adapt to rapidly changing marketplace realities brought on by the Information Revolution, robotics, and automated systems.
That sort of adaptation takes time. For technological revolutions to take hold and have a meaningful impact on economic growth and worker conditions, large numbers of ordinary workers must acquire new knowledge and skills, Bessen notes. But "that is a slow and difficult process, and history suggests that it often requires social changes supported by accommodating institutions and culture." (p. 223) That is not a reason to resist disruptive forms of technological change, however. To the contrary, Bessen says, it is crucial to allow ongoing trial-and-error experimentation and innovation to continue precisely because it represents a learning process that helps people (and workers in particular) adapt to changing circumstances and acquire new skills to deal with them. That, in a nutshell, is "learning by doing." As he elaborates elsewhere in the book:
Major new technologies become ‘revolutionary’ only after a long process of learning by doing and incremental improvement. Having the breakthrough idea is not enough. But learning through experience and experimentation is expensive and slow. Experimentation involves a search for productive techniques: testing and eliminating bad techniques in order to find good ones. This means that workers and equipment typically operate for extended periods at low levels of productivity using poor techniques and are able to eliminate those poor practices only when they find something better. (p. 50)
Luckily, however, history also suggests that, time and time again, that process has run its course and the standard of living for workers and average citizens alike has improved along the way. Continue reading →
I wanted to draw your attention to yet another spectacular speech by Maureen K. Ohlhausen, a Commissioner with the Federal Trade Commission (FTC). I have written here before about Commissioner Ohlhausen’s outstanding speeches, but this latest one might be her best yet.
On Tuesday, Ohlhausen was speaking at a U.S. Chamber of Commerce Foundation day-long event on "The Internet of Everything: Data, Networks and Opportunities." The conference featured various keynote speakers and panels discussing "the many ways that data and Internet connectivity is changing the face of business and society." (It was my honor to also be invited to deliver an address to the crowd that day.)
As with many of her other recent addresses, Commissioner Ohlhausen stressed why it is so important that policymakers “approach new technologies and new business models with regulatory humility.” Building on the work of the great Austrian economist F.A. Hayek, who won a Nobel prize in part for his work explaining the limits of our knowledge to plan societies and economies, Ohlhausen argues that: Continue reading →
On the whiteboard that hangs in my office, I have a giant matrix of technology policy issues and the various policy “threat vectors” that might end up driving regulation of particular technologies or sectors. Along with my colleagues at the Mercatus Center’s Technology Policy Program, we constantly revise this list of policy priorities and simultaneously make an (obviously quite subjective) attempt to put some weights on the potential policy severity associated with each threat of intervention. The matrix looks like this: [Sorry about the small fonts. You can click on the image to make it easier to see.]
I use five general policy concerns when considering the likelihood of regulatory intervention in any given area. Those policy concerns are:
privacy (reputation issues, fear of “profiling” & “discrimination,” amorphous psychological / cognitive harms);
safety (health & physical safety or, alternatively, child safety and speech / cultural concerns);
security (hacking, cybersecurity, law enforcement issues);
Make sure to watch this terrific little MR University video featuring my Mercatus Center colleague Don Boudreaux discussing what fueled the "Orgy of Innovation" we have witnessed over the past century. Don brings in one of our mutual heroes, the economic historian Deirdre McCloskey, who has coined the term "innovationism" to describe the phenomenal rise in innovation over the past couple hundred years. As I have noted in my essay on "Embracing a Culture of Permissionless Innovation," McCloskey's work highlights the essential role that values—cultural attitudes, social norms, and political pronouncements—have played in influencing opportunities for entrepreneurialism, innovation, and long-term growth. Watch Don's video for more details:
Last week while I was visiting the Silicon Valley area, it was my pleasure to visit the venture capital firm of Andreessen Horowitz. While I was there, Sonal Chokshi was kind enough to invite me on the a16z podcast for an episode focused on "Making the Case for Permissionless Innovation." We had a great discussion on a wide range of disruptive technology policy issues (robotics, drones, driverless cars, medical technology, Internet of Things, crypto, etc.) and also talked about how innovators should approach Washington and public policymakers more generally. Our 23-minute conversation follows:
And for more reading on permissionless innovation more generally, see my book page.
I was delivering a lecture to a group of academics and students out in San Jose recently [see the slideshow here] and someone in the crowd asked me to send them a list of some of the many books I had mentioned during my talk, which was about future policy clashes over various emerging technologies. I cut the list down to the five books that I believe best frame the nature of debates over innovation and technology policy. They are:
Virginia Postrel, The Future and Its Enemies (1998): Contrasts the conflicting worldviews of "dynamism" and "stasis" and shows how the tensions between these two visions influence debates over technological progress. No book has had a greater influence on my own thinking about debates over innovation and progress.
If you haven’t read these amazing books yet, add them to your collection right now! They are worth reading again and again. They will forever change the way you think about debates over technology and innovation.
It was my pleasure this week to be invited to deliver some comments at an event hosted by the Information Technology and Innovation Foundation (ITIF) to coincide with the release of their latest study, “The Privacy Panic Cycle: A Guide to Public Fears About New Technologies.” The goal of the new ITIF report, which was co-authored by Daniel Castro and Alan McQuinn, is to highlight the dangers associated with “the cycle of panic that occurs when privacy advocates make outsized claims about the privacy risks associated with new technologies. Those claims then filter through the news media to policymakers and the public, causing frenzies of consternation before cooler heads prevail, people come to understand and appreciate innovative new products and services, and everyone moves on.” (p. 1)
As Castro and McQuinn describe it, the privacy panic cycle “charts how perceived privacy fears about a technology grow rapidly at the beginning, but eventually decline over time.” They divide this cycle into four phases: Trusting Beginnings, Rising Panic, Deflating Fears, and Moving On. Here’s how they depict it in an image:
I’ve been thinking about the “right to try” movement a lot lately. It refers to the growing movement (especially at the state level here in the U.S.) to allow individuals to experiment with alternative medical treatments, therapies, and devices that are restricted or prohibited in some fashion (typically by the Food and Drug Administration). I think there are compelling ethical reasons for allowing citizens to determine their own course of treatment in terms of what they ingest into their bodies or what medical devices they use, especially when they are facing the possibility of death and have exhausted all other options.
But I also favor a more general “right to try” that allows citizens to make their own health decisions in other circumstances. Such a general freedom entails some risks, of course, but the better way to deal with those potential downsides is to educate citizens about the trade-offs associated with various treatments and devices, not to forbid them from seeking them out at all.
The Costs of Control
But this debate isn’t just about ethics. There’s also the question of the costs associated with regulatory control. Practically speaking, with each passing day it becomes harder and harder for governments to control unapproved medical devices, drugs, therapies, etc. Correspondingly, that significantly raises the costs of enforcement and makes one wonder exactly how far the FDA or other regulators will go to stop or slow the advent of new technologies.
I have written about this "cost of control" problem in various law review articles as well as my little Permissionless Innovation book and pointed out that, when enforcement challenges and costs reach a certain threshold, the case for preemptive control grows far weaker simply because of (1) the massive resources that regulators would have to pour into the task of crafting a workable enforcement regime; and/or (2) the massive loss of liberty it would entail for society more generally to devise such solutions. With the rise of the Internet of Things, wearable devices, mobile medical apps, and other networked health and fitness technologies, these issues are going to become increasingly ripe for academic and policy consideration. Continue reading →
Yesterday, the White House Council of Economic Advisers released an important new report entitled, “Occupational Licensing: A Framework for Policymakers.” (PDF, 76 pgs.) The report highlighted the costs that outdated or unneeded licensing regulations can have on diverse portions of the citizenry. Specifically, the report concluded that:
the current licensing regime in the United States also creates substantial costs, and often the requirements for obtaining a license are not in sync with the skills needed for the job. There is evidence that licensing requirements raise the price of goods and services, restrict employment opportunities, and make it more difficult for workers to take their skills across State lines. Too often, policymakers do not carefully weigh these costs and benefits when making decisions about whether or how to regulate a profession through licensing.
The report supported these conclusions with a wealth of evidence. In that regard, I was pleased to see that research from Mercatus Center-affiliated scholars was cited in the White House report (specifically on pg. 34). Mercatus Center scholars have repeatedly documented the costs of occupational licensing and offered suggestions for how to reform or eliminate unnecessary licensing practices. Most recently, my colleagues and I have explored the costs of licensing restrictions for new sharing economy platforms and innovators. The White House report cited, for example, the recently released Mercatus paper on "How the Internet, the Sharing Economy, and Reputational Feedback Mechanisms Solve the 'Lemons Problem,'" which I co-authored with Christopher Koopman, Anne Hobson, and Chris Kuiper. And it also cited a new essay by Tyler Cowen and Alex Tabarrok on "The End of Asymmetric Information." Continue reading →