Smart Device Paranoia

October 5, 2015

The idea that the world needs further dumbing down was really the last thing on my mind. Yet this is exactly what Jay Stanley argues for in a recent post on Free Future, the ACLU tech blog.

Specifically, Stanley is concerned by the proliferation of “smart devices,” from smart homes to smart watches, and the enigmatic algorithms that power them. Exhibit A: the Volkswagen “smart control devices” designed to deliberately mis-measure diesel emissions. Far from treating it as an isolated case, Stanley extrapolates the Volkswagen scandal into a parable about the dangers of smart devices more generally, and calls for the recognition of “the virtue of dumbness”:

When we flip a coin, its dumbness is crucial. It doesn’t know that the visiting team is the massive underdog, that the captain’s sister just died of cancer, and that the coach is at risk of losing his job. It’s the coin’s very dumbness that makes everyone turn to it as a decider. … But imagine the referee has replaced it with a computer programmed to perform a virtual coin flip. There’s a reason we recoil at that idea. If we were ever to trust a computer with such a task, it would only be after a thorough examination of the computer’s code, mainly to find out whether the computer’s decision is based on “knowledge” of some kind, or whether it is blind as it should be.

While recoiling is a bit melodramatic, it is clear from this that “dumbness” is not even the key issue at stake. What Stanley is really concerned about is bias or partiality (what he dubs “neutrality anxiety”), and neither bias nor opacity is unique to smart devices. A physical coin can be biased, a programmed coin can be fair, and at first glance the fairness of a physical coin is no more obvious than that of a virtual one.

Yet this is the argument Stanley uses to justify his proposed requirement that, going forward, all smart-device code be open to the public for scrutiny. Based on a knee-jerk commitment to transparency, he gives zero weight to the social benefit of allowing software creators a level of trade secrecy, especially as a potential substitute for patent and copyright protections. This is all the more ironic given that Volkswagen used existing copyright law to hide its own malfeasance.

More importantly, the idea that the only way to check a virtual coin is to look at the source code is a serious non sequitur. After all, in-use testing is how Volkswagen was actually caught in the end. What matters, in other words, is how the coin behaves across large and varied samples. In either the virtual or the physical case, the best and least intrusive way to check a coin is simply to flip it thousands of times. But what takes hours with a dumb coin takes a fraction of a second with a virtual coin. So I know which I prefer.
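To make that concrete, here is a minimal sketch of what such in-use testing might look like. Everything in it is my own illustration, not anything from Stanley’s post: `virtual_coin` is a hypothetical stand-in for the black-box coin under audit, and the check uses a simple normal approximation to the binomial distribution.

```python
# In-use testing of a black-box coin: no access to source code required.
import random
from statistics import NormalDist

def virtual_coin() -> int:
    """Hypothetical stand-in for the coin under audit (1 = heads)."""
    return random.randint(0, 1)

def audit_coin(flip, n: int = 100_000) -> float:
    """Flip the coin n times and return a two-sided p-value for the
    null hypothesis that it is fair (normal approximation)."""
    heads = sum(flip() for _ in range(n))
    z = (heads - n / 2) / (n / 4) ** 0.5  # fair coin: mean n/2, std dev sqrt(n/4)
    return 2 * (1 - NormalDist().cdf(abs(z)))

p_value = audit_coin(virtual_coin)
print(f"p-value: {p_value:.3f}")  # a tiny p-value would signal a biased coin
```

A hundred thousand virtual flips complete in well under a second, which is exactly the asymmetry described above.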

Continue reading →

I recently finished Learning by Doing: The Real Connection between Innovation, Wages, and Wealth, by James Bessen of the Boston University Law School. It’s a good book to check out if you are worried about whether workers will be able to weather this latest wave of technological innovation. One of the key insights of Bessen’s book is that, as with previous periods of turbulent technological change, today’s workers and businesses will need to find ways to adapt to the rapidly changing marketplace realities brought on by the Information Revolution, robotics, and automated systems.

That sort of adaptation takes time. For technological revolutions to take hold and have a meaningful impact on economic growth and worker conditions, Bessen notes, large numbers of ordinary workers must acquire new knowledge and skills. But “that is a slow and difficult process, and history suggests that it often requires social changes supported by accommodating institutions and culture.” (p. 223) That is not a reason to resist disruptive forms of technological change, however. To the contrary, Bessen says, it is crucial to allow ongoing trial-and-error experimentation and innovation to continue precisely because they represent a learning process that helps people (and workers in particular) adapt to changing circumstances and acquire new skills to deal with them. That, in a nutshell, is “learning by doing.” As he elaborates elsewhere in the book:

Major new technologies become ‘revolutionary’ only after a long process of learning by doing and incremental improvement. Having the breakthrough idea is not enough. But learning through experience and experimentation is expensive and slow. Experimentation involves a search for productive techniques: testing and eliminating bad techniques in order to find good ones. This means that workers and equipment typically operate for extended periods at low levels of productivity using poor techniques and are able to eliminate those poor practices only when they find something better. (p. 50)

Luckily, history also suggests that, time and time again, that process has played out and the standard of living for workers and average citizens alike has improved along the way. Continue reading →

I wanted to draw your attention to yet another spectacular speech by Maureen K. Ohlhausen, a Commissioner with the Federal Trade Commission (FTC). I have written here before about Commissioner Ohlhausen’s outstanding speeches, but this latest one might be her best yet.

On Tuesday, Ohlhausen was speaking at a day-long U.S. Chamber of Commerce Foundation event on “The Internet of Everything: Data, Networks and Opportunities.” The conference featured various keynote speakers and panels discussing “the many ways that data and Internet connectivity is changing the face of business and society.” (It was my honor to also be invited to deliver an address to the crowd that day.)

As with many of her other recent addresses, Commissioner Ohlhausen stressed why it is so important that policymakers “approach new technologies and new business models with regulatory humility.” Building on the work of the great Austrian economist F.A. Hayek, who won a Nobel Prize in part for his work explaining the limits of our knowledge when it comes to planning societies and economies, Ohlhausen argues that: Continue reading →

Tech Policy Threat Matrix

September 24, 2015

On the whiteboard that hangs in my office, I have a giant matrix of technology policy issues and the various policy “threat vectors” that might end up driving regulation of particular technologies or sectors. My colleagues at the Mercatus Center’s Technology Policy Program and I constantly revise this list of policy priorities and simultaneously make an (obviously quite subjective) attempt to put some weights on the potential policy severity associated with each threat of intervention. The matrix looks like this:


[Image: Tech Policy Issue Matrix 2015]

I use five general policy concerns when considering the likelihood of regulatory intervention in any given area. Those policy concerns, with a rough illustrative encoding sketched after the list, are:

  1. privacy (reputation issues, fear of “profiling” & “discrimination,” amorphous psychological / cognitive harms);
  2. safety (health & physical safety or, alternatively, child safety and speech / cultural concerns);
  3. security (hacking, cybersecurity, law enforcement issues);
  4. economic disruption (automation, job dislocation, sectoral disruptions); and
  5. intellectual property (copyright and patent issues).
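For illustration only, here is one minimal way such a matrix might be encoded. The technologies listed and the 1-to-5 weights are hypothetical placeholders, not the actual values from my whiteboard.

```python
# Hypothetical encoding of the threat matrix: each technology gets a
# subjective severity weight (1 = minor, 5 = severe) for each policy concern.
CONCERNS = ["privacy", "safety", "security",
            "economic disruption", "intellectual property"]

threat_matrix = {
    # Placeholder technologies and made-up weights, for illustration only.
    "commercial drones": [4, 5, 3, 2, 1],
    "driverless cars":   [3, 5, 4, 4, 1],
    "wearables":         [5, 2, 3, 1, 2],
}

def breakdown(tech: str) -> dict:
    """Pair each policy concern with its weight for one technology."""
    return dict(zip(CONCERNS, threat_matrix[tech]))

def total(tech: str) -> int:
    """Crude overall 'threat of intervention' score: the sum of the weights."""
    return sum(threat_matrix[tech])

for tech in threat_matrix:
    print(f"{tech}: total={total(tech)}, {breakdown(tech)}")
```

A structure like this makes the subjectivity explicit: revising a priority is just changing a number, and re-ranking the issues falls out of a sum.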

Continue reading →

Make sure to watch this terrific little MR University video featuring my Mercatus Center colleague Don Boudreaux discussing what fueled the “Orgy of Innovation” we have witnessed over the past century. Don brings in one of our mutual heroes, the economic historian Deirdre McCloskey, who has coined the term “innovationism” to describe the phenomenal rise in innovation over the past couple hundred years. As I have noted in my essay on “Embracing a Culture of Permissionless Innovation,” McCloskey’s work highlights the essential role that values—cultural attitudes, social norms, and political pronouncements—have played in influencing opportunities for entrepreneurialism, innovation, and long-term growth. Watch Don’s video for more details:

Last week, while I was visiting the Silicon Valley area, it was my pleasure to visit the venture capital firm Andreessen Horowitz. While I was there, Sonal Chokshi was kind enough to invite me on the a16z podcast for an episode focused on “Making the Case for Permissionless Innovation.” We had a great discussion on a wide range of disruptive technology policy issues (robotics, drones, driverless cars, medical technology, Internet of Things, crypto, etc.) and also talked about how innovators should approach Washington and public policymakers more generally. Our 23-minute conversation follows:

And for more reading on permissionless innovation more generally, see my book page.

I was delivering a lecture to a group of academics and students out in San Jose recently [see the slideshow here] and someone in the crowd asked me to send them a list of some of the many books I had mentioned during my talk, which was about future policy clashes over various emerging technologies. I cut the list down to the five books that I believe best frame the nature of debates over innovation and technology policy. They are:

If you haven’t read these amazing books yet, add them to your collection right now! They are worth reading again and again. They will forever change the way you think about debates over technology and innovation.

[Image: 5 innovation book covers]

Since the release of my book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, it has been my pleasure to be invited to speak to dozens of groups about the future of technology policy debates. In the process, I have developed and continuously refined a slide show entitled “‘Permissionless Innovation’ & the Clash of Visions over Emerging Technologies.” After delivering this talk twice more last week, I figured I would post the latest slide deck I’m using for the presentation. It’s embedded below, or it can be found at the link above.

It was my pleasure this week to be invited to deliver some comments at an event hosted by the Information Technology and Innovation Foundation (ITIF) to coincide with the release of their latest study, “The Privacy Panic Cycle: A Guide to Public Fears About New Technologies.” The goal of the new ITIF report, which was co-authored by Daniel Castro and Alan McQuinn, is to highlight the dangers associated with “the cycle of panic that occurs when privacy advocates make outsized claims about the privacy risks associated with new technologies. Those claims then filter through the news media to policymakers and the public, causing frenzies of consternation before cooler heads prevail, people come to understand and appreciate innovative new products and services, and everyone moves on.” (p. 1)

As Castro and McQuinn describe it, the privacy panic cycle “charts how perceived privacy fears about a technology grow rapidly at the beginning, but eventually decline over time.” They divide this cycle into four phases: Trusting Beginnings, Rising Panic, Deflating Fears, and Moving On. Here’s how they depict it in an image:

[Image: The Privacy Panic Cycle]


Continue reading →

Hal Singer has discovered that total wireline broadband investment declined 12% in the first half of 2015 compared with the first half of 2014. The net decrease was $3.3 billion across the six largest ISPs. As for what could have caused this, the Federal Communications Commission’s Open Internet Order “is the best explanation for the capex meltdown,” Singer writes.

Despite numerous warnings from economists and other experts, the FCC confidently predicted in paragraph 40 of the Open Internet Order that “recent events have demonstrated that our rules will not disrupt capital markets or investment.”

When the commission adopted the Open Internet Order by a partisan 3-2 vote, Chairman Wheeler acknowledged that diminished investment in the network would be unacceptable. His statement said:

Our challenge is to achieve two equally important goals: ensure incentives for private investment in broadband infrastructure so the U.S. has world-leading networks and ensure that those networks are fast, fair, and open for all Americans. (emphasis added)

The Open Internet Order achieves the first goal, he claimed, by “providing certainty for broadband providers and the online marketplace.” (emphasis added)

Yet the Order, by asserting jurisdiction over interconnection for the first time and by adding a vague new catchall “general conduct” rule, is a recipe for uncertainty. When asked at a February press conference to provide some examples of how the general conduct rule might be used to stop “new and novel threats” to the Internet, Wheeler admitted “we don’t really know…we don’t know where things go next…” This is not certainty.

As Singer points out, the FCC has speculated that the Open Internet rules would generate only $100 million in annual benefits for content providers, compared with a reduction in network investment of at least $3.3 billion since last year. The rules obviously would not survive a cost-benefit analysis, but I am not sure they will even survive some preliminary questions and make it to the cost-benefit stage. Continue reading →
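A back-of-the-envelope comparison of the two figures cited above, treating the one-year capex decline as an annual cost (admittedly a simplification), makes the gap plain:

```python
# Rough arithmetic on the two figures cited above.
annual_benefits = 100e6   # FCC's speculative estimate: $100 million per year
investment_drop = 3.3e9   # wireline capex decline, H1 2015 vs. H1 2014

ratio = investment_drop / annual_benefits
print(f"Estimated costs exceed estimated benefits by {ratio:.0f}x")  # -> 33x
```

On these numbers alone, the claimed costs outweigh the claimed benefits by roughly a factor of 33.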