I recently finished Learning by Doing: The Real Connection between Innovation, Wages, and Wealth, by James Bessen of the Boston University Law School. It’s a good book to check out if you are worried about whether workers will be able to weather this latest wave of technological innovation. One of the key insights of Bessen’s book is that, as with previous periods of turbulent technological change, today’s workers and businesses will obviously need to find ways to adapt to the rapidly changing marketplace realities brought on by the Information Revolution, robotics, and automated systems.
That sort of adaptation takes time. For technological revolutions to take hold and have a meaningful impact on economic growth and worker conditions, large numbers of ordinary workers must acquire new knowledge and skills, Bessen notes. But, “that is a slow and difficult process, and history suggests that it often requires social changes supported by accommodating institutions and culture.” (p. 223) That is not a reason to resist disruptive forms of technological change, however. To the contrary, Bessen says, it is crucial to allow ongoing trial-and-error experimentation and innovation to continue precisely because it represents a learning process that helps people (and workers in particular) adapt to changing circumstances and acquire new skills to deal with them. That, in a nutshell, is “learning by doing.” As he elaborates elsewhere in the book:
Major new technologies become ‘revolutionary’ only after a long process of learning by doing and incremental improvement. Having the breakthrough idea is not enough. But learning through experience and experimentation is expensive and slow. Experimentation involves a search for productive techniques: testing and eliminating bad techniques in order to find good ones. This means that workers and equipment typically operate for extended periods at low levels of productivity using poor techniques and are able to eliminate those poor practices only when they find something better. (p. 50)
Luckily, however, history also suggests that, time and time again, that process has played out and the standard of living for workers and average citizens alike has improved along the way. Continue reading →
I wanted to draw your attention to yet another spectacular speech by Maureen K. Ohlhausen, a Commissioner with the Federal Trade Commission (FTC). I have written here before about Commissioner Ohlhausen’s outstanding speeches, but this latest one might be her best yet.
On Tuesday, Ohlhausen was speaking at a U.S. Chamber of Commerce Foundation day-long event on “The Internet of Everything: Data, Networks and Opportunities.” The conference featured various keynote speakers and panels discussing “the many ways that data and Internet connectivity is changing the face of business and society.” (It was my honor to also be invited to deliver an address to the crowd that day.)
As with many of her other recent addresses, Commissioner Ohlhausen stressed why it is so important that policymakers “approach new technologies and new business models with regulatory humility.” Building on the work of the great Austrian economist F.A. Hayek, who won a Nobel prize in part for his work explaining the limits of our knowledge when it comes to planning societies and economies, Ohlhausen argues that: Continue reading →
On the whiteboard that hangs in my office, I have a giant matrix of technology policy issues and the various policy “threat vectors” that might end up driving regulation of particular technologies or sectors. My colleagues in the Mercatus Center’s Technology Policy Program and I constantly revise this list of policy priorities and simultaneously make an (obviously quite subjective) attempt to weight the potential severity of each threat of intervention. The matrix looks like this: [Sorry about the small fonts. You can click on the image to make it easier to see.]
I use five general policy concerns when considering the likelihood of regulatory intervention in any given area. Those policy concerns are:
privacy (reputation issues, fear of “profiling” & “discrimination,” amorphous psychological / cognitive harms);
safety (health & physical safety or, alternatively, child safety and speech / cultural concerns);
security (hacking, cybersecurity, law enforcement issues);
Make sure to watch this terrific little MR University video featuring my Mercatus Center colleague Don Boudreaux discussing what fueled the “Orgy of Innovation” we have witnessed over the past century. Don brings in one of our mutual heroes, the economic historian Deirdre McCloskey, who has coined the term “innovationism” to describe the phenomenal rise in innovation over the past couple hundred years. As I have noted in my essay on “Embracing a Culture of Permissionless Innovation,” McCloskey’s work highlights the essential role that values—cultural attitudes, social norms, and political pronouncements—have played in influencing opportunities for entrepreneurialism, innovation, and long-term growth. Watch Don’s video for more details:
Last week while I was visiting the Silicon Valley area, it was my pleasure to visit the venture capital firm of Andreessen Horowitz. While I was there, Sonal Chokshi was kind enough to invite me on the a16z podcast, which was focused on “Making the Case for Permissionless Innovation.” We had a great discussion on a wide range of disruptive technology policy issues (robotics, drones, driverless cars, medical technology, Internet of Things, crypto, etc.) and also talked about how innovators should approach Washington and public policymakers more generally. Our 23-minute conversation follows:
And for more reading on permissionless innovation more generally, see my book page.
I was delivering a lecture to a group of academics and students out in San Jose recently [see the slideshow here] and someone in the crowd asked me to send them a list of some of the many books I had mentioned during my talk, which was about future policy clashes over various emerging technologies. I cut the list down to the five books that I believe best frame the nature of debates over innovation and technology policy. They are:
Virginia Postrel, The Future and Its Enemies (1998): Contrasts the conflicting worldviews of “dynamism” and “stasis” and shows how the tensions between these two visions influence debates over technological progress. No book has had a greater influence on my own thinking about debates over innovation and progress.
Joel Mokyr, Lever of Riches: Technological Creativity and Economic Progress (1990): One of the finest histories of technological innovation ever penned. It’s certainly my favorite. It explores how earlier technologies evolved and created social and economic tensions.
Matt Ridley, The Rational Optimist: How Prosperity Evolves (2010): Makes the case for “rational optimism” in debates about technology and innovation and takes on pessimistic critics of technological change.
Larry Downes, The Laws of Disruption: Harnessing the New Forces That Govern Life and Business in the Digital Age (2009): Explains how lawmaking in the information age is inexorably governed by the “law of disruption,” or the fact that “technology changes exponentially, but social, economic, and legal systems change incrementally.” That fact, he explains, has profound ramifications for all technology policy debates going forward.
If you haven’t read these amazing books yet, add them to your collection right now! They are worth reading again and again. They will forever change the way you think about debates over technology and innovation.
It was my pleasure this week to be invited to deliver some comments at an event hosted by the Information Technology and Innovation Foundation (ITIF) to coincide with the release of their latest study, “The Privacy Panic Cycle: A Guide to Public Fears About New Technologies.” The goal of the new ITIF report, which was co-authored by Daniel Castro and Alan McQuinn, is to highlight the dangers associated with “the cycle of panic that occurs when privacy advocates make outsized claims about the privacy risks associated with new technologies. Those claims then filter through the news media to policymakers and the public, causing frenzies of consternation before cooler heads prevail, people come to understand and appreciate innovative new products and services, and everyone moves on.” (p. 1)
As Castro and McQuinn describe it, the privacy panic cycle “charts how perceived privacy fears about a technology grow rapidly at the beginning, but eventually decline over time.” They divide this cycle into four phases: Trusting Beginnings, Rising Panic, Deflating Fears, and Moving On. Here’s how they depict it in an image:
Hal Singer has discovered that total wireline broadband investment declined 12% in the first half of 2015 compared to the first half of 2014. The net decrease was $3.3 billion across the six largest ISPs. As for what caused the decline, the Federal Communications Commission’s Open Internet Order “is the best explanation for the capex meltdown,” Singer writes.
Despite numerous warnings from economists and other experts, the FCC confidently predicted in paragraph 40 of the Open Internet Order that “recent events have demonstrated that our rules will not disrupt capital markets or investment.”
When the commission adopted the Open Internet Order by a partisan 3-2 vote, Chairman Wheeler acknowledged that diminished investment in the network would be unacceptable. His statement said:
Our challenge is to achieve two equally important goals: ensure incentives for private investment in broadband infrastructure so the U.S. has world-leading networks and ensure that those networks are fast, fair, and open for all Americans. (emphasis added.)
The Open Internet Order achieves the first goal, he claimed, by “providing certainty for broadband providers and the online marketplace.” (emphasis added.)
Yet by asserting jurisdiction over interconnection for the first time and by adding a vague new catchall “general conduct” rule, the Order is a recipe for uncertainty. When asked at a February press conference to provide some examples of how the general conduct rule might be used to stop “new and novel threats” to the Internet, Wheeler admitted “we don’t really know…we don’t know where things go next…” This is not certainty.
As Singer points out, the FCC has speculated that the Open Internet rules would generate only $100 million in annual benefits for content providers, compared to a reduction in network investment of at least $3.3 billion since last year. While the rules obviously won’t survive a cost-benefit analysis, I’m not sure they will even survive some preliminary questions and reach the cost-benefit stage. Continue reading →
As FCC Commissioner Jessica Rosenworcel said of the Internet, “It is our printing press.” Unfortunately, for First Amendment purposes, regulators and courts treat our modern printing presses — electronic media — very differently from the traditional ones. As a result, there is persistent political and activist pressure on regulators to rule that Internet intermediaries — like social networks and search engines — are not engaging in constitutionally protected speech.
Most controversial is the idea that, as content creators and curators, Internet service providers are speakers with First Amendment rights. The FCC’s 2015 Open Internet Order designates ISPs as common carriers and generally prohibits them from blocking Internet content. The agency asserts outright that ISPs “are not speakers.” These Title II rules may be struck down on procedural grounds, but the First Amendment issues pose a significant threat to them as well.
ISPs Are Speakers
Courts and Congress, as explained below, have long recognized that ISPs possess editorial discretion. Extensive ISP filtering was much more common in the 1990s but still exists today. Take JNet and DNet. These ISPs block large portions of Internet content that may violate religious principles. They also block neutral services like gaming and video if the subscriber wishes. JNet offers several services, including DSL Internet access, and markets itself to religious Jews. It is server-based (not client-based) and offers several types of filters, including application-based blocking, blacklists, and whitelists. Similarly, DNet, targeted mostly to Christian families in the Carolinas, offers DSL and wireless server-based filtering of content like pornography and erotic material. A strict no-blocking rule on the “last mile” access connection, which most net neutrality proponents want enforced, would prohibit these types of services. Continue reading →