Technology, Business & Cool Toys

The Mercatus Center at George Mason University has just released a new paper, “Artificial Intelligence and Public Policy,” which I co-authored with Andrea Castillo O’Sullivan and Raymond Russell. This 54-page paper can be downloaded via the Mercatus website, SSRN, or ResearchGate. Here is the abstract:

There is growing interest in the market potential of artificial intelligence (AI) technologies and applications as well as in the potential risks that these technologies might pose. As a result, questions are being raised about the legal and regulatory governance of AI, machine learning, “autonomous” systems, and related robotic and data technologies. Citing concerns about labor market effects, social inequality, and even physical harm, some have called for precautionary regulations that could have the effect of limiting AI development and deployment. In this paper, we recommend a different policy framework for AI technologies. At this nascent stage of AI technology development, we think a better case can be made for prudence, patience, and a continuing embrace of “permissionless innovation” as it pertains to modern digital technologies. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated, and problems, if they develop at all, can be addressed later.

Whatever you want to call them–autonomous vehicles, driverless cars, automated systems, unmanned systems, connected cars, pilotless vehicles, etc.–the life-saving potential of this new class of technologies appears to be enormous. I’ve spent a lot of time researching and writing about these issues, and I have yet to see any study forecast the opposite (i.e., a net loss of lives due to these technologies). While the estimated life savings vary, the numbers are uniformly positive, and not just in terms of lives saved, but also in terms of reductions in other injuries, property damage, and the aggregate social costs associated with vehicular accidents more generally.

To highlight these important and consistent findings, I asked my research assistant Melody Calkins to help me compile a list of recent studies on this issue and summarize the key takeaways of each one regarding the potential for lives saved. The studies and findings are listed below in reverse chronological order of publication. I may add to this list over time, so please feel free to shoot me suggested updates as they become available.

These findings should have some bearing on public policy toward these technologies. Namely, we should be taking steps to accelerate this transition and remove roadblocks to the driverless car revolution, because if we get policy right, we could be talking about the biggest public health success story of our lifetime. Every day matters, because each day we delay this transition is another day during which 90 people die in car crashes and more than 6,500 are injured. And sadly, those numbers are going up, not down. According to the National Highway Traffic Safety Administration (NHTSA), the roadway death toll is climbing for the first time in decades. Meanwhile, the agency estimates that 94 percent of all crashes are attributable to human error. We have the potential to do something about this tragedy, but we have to get public policy right. Delay is not an option.
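To put those daily figures in annual terms, here is a back-of-envelope calculation using only the numbers cited above (and assuming, roughly, that the 94 percent human-error share of crashes carries over to fatalities):

  90 deaths/day × 365 days ≈ 32,850 deaths/year
  0.94 × 32,850 ≈ 30,900 deaths/year potentially attributable to human error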


By Brent Skorup and Melody Calkins

Tech-optimists predict that drones and small aircraft may soon crowd US skies. An FAA administrator predicted that by 2020 tens of thousands of drones would be in US airspace at any one time. Further, over a dozen companies, including Uber, are building vertical takeoff and landing (VTOL) aircraft that could one day shuttle people point-to-point in urban areas. Today, low-altitude airspace use is episodic (helicopters, ultralights, drones), and with such light use the airspace is shared on an ad hoc basis with little air traffic management. Coordinating thousands of aircraft in low-altitude flight, however, demands a new regulatory framework.

Why not auction off low-altitude airspace for exclusive use?

There are two basic paradigms for resource use: open access and exclusive ownership. Most high-altitude airspace is lightly used and the open access regime works tolerably well because there are a small number of players (airline operators and the government) and fixed routes. Similarly, Class G airspace—which varies by geography but is generally the airspace from the surface to 700 feet above ground—is uncontrolled and virtually open access.

Valuable resources vary immensely in their character–taxi medallions, real estate, radio spectrum, intellectual property, water–and a resource use paradigm, once selected, requires iteration and modification to ensure productive use. “The trick,” Prof. Richard Epstein notes, “is to pick the right initial point to reduce the stress on making these further adjustments.” If indeed dozens of operators will be vying for variable drone and VTOL routes in hundreds of local markets, exclusive use models could create more social benefits and output than open access and regulatory management. NASA is exploring complex coordination systems in this airspace, but rather than agency permissions, lawmakers should consider using property rights and the price mechanism.

The initial allocation of airspace could be determined by auction. An agency, probably the FAA, would:

  1. Identify and define geographic parcels of Class G airspace;
  2. Auction off the parcels to any party (private corporations, local governments, non-commercial stakeholders, or individual users) for a term of years with an expectation of renewal; and
  3. Permit the sale, combination, and subleasing of those parcels (a rough sketch of this process appears below).
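To make that three-step process concrete, here is a minimal sketch in Python of how a sealed-bid parcel auction and a follow-on secondary-market sale might fit together. The parcel name, bidders, prices, and the ten-year term are all hypothetical illustrations, not details from any actual FAA proposal:

```python
# A minimal sketch of the three-step allocation described above, with
# hypothetical parcel and bidder names. A sealed-bid auction assigns each
# parcel to the highest bidder for a term of years; a resale step
# illustrates the secondary market that step 3 would permit.

from dataclasses import dataclass

@dataclass
class Parcel:
    name: str                  # geographic identifier for a block of Class G airspace
    term_years: int = 10       # hypothetical term, with an expectation of renewal
    owner: str = "unassigned"

def run_auction(parcel: Parcel, bids: dict) -> None:
    """Step 2: assign the parcel to the highest sealed bid."""
    winner = max(bids, key=bids.get)
    parcel.owner = winner
    print(f"{parcel.name}: won by {winner} at ${bids[winner]:,.0f}")

def resell(parcel: Parcel, buyer: str, price: float) -> None:
    """Step 3: a secondary-market sale lets a latecomer buy access."""
    print(f"{parcel.name}: {parcel.owner} -> {buyer} at ${price:,.0f}")
    parcel.owner = buyer

if __name__ == "__main__":
    # Step 1: the agency identifies and defines a geographic parcel.
    parcel = Parcel(name="Grid-17, surface to 700 ft AGL")
    # Step 2: auction to any party -- firms, local governments, individuals.
    run_auction(parcel, {"DroneCo": 120_000, "CityGov": 95_000, "VTOL Inc": 110_000})
    # Step 3: resale and subleasing reallocate the parcel to a higher-valued use.
    resell(parcel, "VTOL Inc", 150_000)
```

The point of the sketch is the last step: because resale and subleasing are permitted, the initial assignment does not have to be perfect for the resource to end up in its highest-valued use.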

The likely alternative scenario–regulatory allocation and management of airspace–derives from historical precedent in aviation and spectrum policy:

  1. First movers and the politically powerful acquire de facto control of low-altitude airspace,
  2. Incumbents and regulators exclude and inhibit newcomers and innovators,
  3. The rent-seeking and resource waste becomes unendurable for lawmakers, and
  4. Market-based reforms are slowly and haphazardly introduced.

For instance, after demand for commercial flights took off in the 1960s, a command-and-control quota system was created for crowded Northeast airports. Takeoff and landing rights, called “slots,” were assigned to early airlines, but regulators did not allow airlines to sell those rights. The resulting anticompetitive concentration and hoarding of airport slots is still being slowly unraveled by Congress and the FAA to this day. There’s a similar story for the government assignment of spectrum over the decades, as explained in Thomas Hazlett’s excellent new book, The Political Spectrum.

The benefit of an auction, plus secondary markets, is that the resource is generally put to its highest-valued use. Secondary markets and subleasing also permit latecomers and innovators to gain resource access despite lacking an initial assignment or political power. Further, exclusive use rights would provide VTOL operators (and passengers) the added assurance that routes would be “clear” of potential collisions. (A more regulatory regime might provide that assurance, but likely via complex restrictions on airspace use.) Airspace rights would be a new cost for operators, but exclusive use means operators could economize on complex sensors, other safety devices, and lobbying costs. Operators would also possess an asset to sublease and monetize.

Another bonus (from the government’s point of view) is that the sale of Class G airspace can provide government revenue. Revenue would be slight at first but could prove lucrative once there’s substantial commercial interest. The federal government, for instance, auctions off usage rights for grazing, oil and gas retrieval, radio spectrum, mineral extraction, and timber harvesting. Spectrum auctions alone have raised over $100 billion for the Treasury since they began in 1994.

I’ve written here before about the problems associated with the “technopanic mentality,” especially when it comes to how technopanics sometimes come to shape public policy decisions and restrict important new, life-enriching innovations. As I argued in a recent book, the problem with this sort of Chicken Little thinking is that, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning, it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.”

Perhaps the worst thing about worst-case thinking is how short-sighted and hubristic it can be. The technopanic crowd often has an air of snootiness about it, asking us to largely ignore the generally positive long-run trends associated with technological innovation and instead focus on hypothetical fears about an uncertain future that apparently only they can foresee. This is the case whether they are predicting the destruction of jobs, the economy, lifestyles, or culture. Techno-dystopia lies just around the corner, they say, but the rest of us are ignorant sheep who just can’t see it coming!

In his wonderful 2013 book, Smarter Than You Think: How Technology Is Changing Our Minds for the Better, Clive Thompson correctly noted that “dystopian predictions are easy to generate” and “doomsaying is emotionally self-protective: if you complain that today’s technology is wrecking the culture, you can tell yourself you’re a gimlet-eyed critic who isn’t hoodwinked by high-tech trends and silly, popular activities like social networking. You seem like someone who has a richer, deeper appreciation for the past and who stands above the triviality of today’s life.”

Stated differently, the doomsayers are guilty of a type of social and technical arrogance. They are almost always wrong on history, wrong on culture, and wrong on facts. Again and again, humans have proven remarkably resilient in the face of technological change and have overcome short-term adversity. Yet the technopanic pundits are almost never called out for their elitist attitudes when their prognostications prove wildly off-base. Even more concerning, their Chicken Little antics lead them and others to ignore the more serious risks that may actually exist and that are worthy of our attention.

Here’s a nice example of that last point, which comes from a silent film made all the way back in 1911!

Written with Christopher Koopman and Brent Skorup (originally published on Medium on 4/10/17)

Innovation isn’t just about the latest gee-whiz gizmos and gadgets. That’s all nice, but something far more profound is at stake: Innovation is the single most important determinant of long-term human well-being. There exists widespread consensus among historians, economists, political scientists and other scholars that technological innovation is the linchpin of expanded economic growth, opportunity, choice, mobility, and human flourishing more generally. It is the ongoing search for new and better ways of doing things that drives human learning and prosperity in every sense — economic, social, and cultural.

As the Industrial Revolution revealed, leaps in economic and human growth cannot be planned. They arise from societies that reward risk takers and legal systems that accommodate change. Our ability to achieve progress is directly proportional to our willingness to embrace and benefit from technological innovation, and it is a direct result of getting public policies right.

The United States is uniquely positioned to lead the world into the next era of global technological advancement and wealth creation. That’s why we and our colleagues at the Technology Policy Program at the Mercatus Center at George Mason University devote so much time and energy to defending the importance of innovation and countering threats to it. Unfortunately, those threats continue to multiply as fast as new technologies emerge.

Today marks the 10th anniversary of the launch of the Apple iPhone. With all the headlines being written today about how the device changed the world forever, it is easy to forget that before its launch, plenty of experts scoffed at the idea that Steve Jobs and Apple had any chance of successfully breaking into the seemingly mature mobile phone market.

After all, those were the days when BlackBerry, Palm, Motorola, and Microsoft were on everyone’s minds. Perhaps, then, it wasn’t so surprising to hear predictions like these leading up to and following the launch of the iPhone:

  • In December 2006, Palm CEO Ed Colligan summarily dismissed the idea that a traditional personal computing company could compete in the smartphone business. “We’ve learned and struggled for a few years here figuring out how to make a decent phone,” he said. “PC guys are not going to just figure this out. They’re not going to just walk in.”
  • In January 2007, Microsoft CEO Steve Ballmer laughed off the prospect of an expensive smartphone without a keyboard having a chance in the marketplace as follows: “Five hundred dollars? Fully subsidized? With a plan? I said that’s the most expensive phone in the world and it doesn’t appeal to business customers because it doesn’t have a keyboard, which makes it not a very good e-mail machine.”
  • In March 2007, computing industry pundit John C. Dvorak argued that “Apple should pull the plug on the iPhone” since “There is no likelihood that Apple can be successful in a business this competitive.” Dvorak believed the mobile handset business was already locked up by the era’s major players. “This is not an emerging business. In fact it’s gone so far that it’s in the process of consolidation with probably two players dominating everything, Nokia Corp. and Motorola Inc.”


This week, my Mercatus Center colleague Andrea Castillo and I filed comments with the White House Office of Science and Technology Policy (OSTP) in a proceeding entitled, “Preparing for the Future of Artificial Intelligence.” For more background on this proceeding and the accompanying workshops that OSTP has hosted on this issue, see this White House site.

In our comments, Andrea and I make the case for prudence, patience, and a continuing embrace of “permissionless innovation” as the appropriate policy framework for artificial intelligence (AI) technologies at this nascent stage of their development. Below, I have pasted our full comments, which were limited to 2,000 words as required by the OSTP. We plan on releasing a much longer report on these issues in the coming months. You can find the full version of the filing, which includes footnotes, here.


Since the release of my book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, it has been my pleasure to be invited to speak to dozens of groups about the future of technology policy debates. In the process, I have developed and continuously refined a slide show entitled “‘Permissionless Innovation’ & the Clash of Visions over Emerging Technologies.” After delivering this talk twice more last week, I figured I would post the latest slide deck I’m using for the presentation. It’s embedded below, or it can be found at the link above.

Along with colleagues at the Mercatus Center at George Mason University, I am releasing two major new reports today dealing with the regulation of the sharing economy. The first report is a 20-page filing to the Federal Trade Commission that we are submitting for the agency’s upcoming June 9th workshop, “The ‘Sharing’ Economy: Issues Facing Platforms, Participants, and Regulators.” We have been invited to participate in that event, and I will be speaking on the fourth panel of the workshop. The filing I am submitting today for that workshop was co-authored with my Mercatus colleagues Christopher Koopman and Matt Mitchell.

The second report we are releasing today is a new 47-page working paper entitled, “How the Internet, the Sharing Economy, and Reputational Feedback Mechanisms Solve the ‘Lemons Problem.'” This study was co-authored with my Mercatus colleagues Christopher Koopman, Anne Hobson, and Chris Kuiper.

I will summarize each report briefly here.

On Sunday night, 60 Minutes aired a feature with the ominous title “Nobody’s Safe on the Internet,” which focused on connected car hacking and Internet of Things (IoT) device security. It was followed yesterday morning by the release of a new report from the office of Senator Edward J. Markey (D-Mass.) called Tracking & Hacking: Security & Privacy Gaps Put American Drivers at Risk, which focused on connected car security and privacy issues. Employing more than a bit of techno-panic flair, these reports basically suggest that we’re all doomed.

On 60 Minutes, we meet former game developer turned Department of Defense “cyber warrior” Dan (“call me DARPA Dan”) Kaufman and learn of his fears for the future: “Today, all the devices that are on the Internet [and] the ‘Internet of Things’ are fundamentally insecure. There is no real security going on. Connected homes could be hacked and taken over.”

60 Minutes reporter Lesley Stahl, for her part, is aghast. “So if somebody got into my refrigerator,” she ventures, “through the internet, then they would be able to get into everything, right?” Replies DARPA Dan, “Yeah, that’s the fear.” Prankish hackers could make your milk go bad, or hack into your garage door opener, or even your car.

This segues into a humorous segment wherein Stahl takes a networked car for a spin. DARPA Dan and his multiple research teams have been hard at work remotely programming this vehicle for years. A “hacker” on DARPA Dan’s team proceeds to torment poor Lesley with automatic windshield wiping, rude and random beeps, and other hijinks. “Oh my word!” exclaims Stahl.