Whatever you want to call them (autonomous vehicles, driverless cars, automated systems, unmanned systems, connected cars, pilotless vehicles, etc.), the life-saving potential of this new class of technologies appears to be enormous. I’ve spent a lot of time researching and writing about these issues, and I have yet to see any study forecast the opposite (i.e., a net loss of lives due to these technologies). While the estimated life savings vary, the numbers are uniformly positive, and not just in terms of lives saved, but also in reductions in other injuries, property damage, and the aggregate social costs associated with vehicular accidents more generally.

To highlight these important and consistent findings, I asked my research assistant Melody Calkins to help me compile a list of recent studies on this issue and summarize each one’s key takeaways regarding the potential for lives saved. The studies and findings are listed below in reverse chronological order of publication. I may try to add to this over time, so please feel free to shoot me suggested updates as they become available.

Needless to say, these findings should have some bearing on public policy toward these technologies. Namely, we should be taking steps to accelerate this transition and remove roadblocks to the driverless car revolution, because if we get policy right here, we could be talking about the biggest public health success story of our lifetime. Every day matters: each day we delay this transition is another day during which 90 people die in car crashes and more than 6,500 are injured. And sadly, those numbers are going up, not down. According to the National Highway Traffic Safety Administration (NHTSA), the roadway death toll is climbing for the first time in decades. Meanwhile, the agency estimates that 94 percent of all crashes are attributable to human error. We have the potential to do something about this tragedy, but we have to get public policy right. Delay is not an option.


By Brent Skorup and Melody Calkins

Tech-optimists predict that drones and small aircraft may soon crowd US skies. An FAA administrator predicted that by 2020 tens of thousands of drones would be in US airspace at any one time. Further, over a dozen companies, including Uber, are building vertical takeoff and landing (VTOL) aircraft that could one day shuttle people point-to-point in urban areas. Today, low-altitude airspace use is episodic (helicopters, ultralights, drones), and with such light use it is shared on an ad hoc basis with little air traffic management. Coordinating thousands of aircraft in low-altitude flight, however, demands a new regulatory framework.

Why not auction off low-altitude airspace for exclusive use?

There are two basic paradigms for resource use: open access and exclusive ownership. Most high-altitude airspace is lightly used and the open access regime works tolerably well because there are a small number of players (airline operators and the government) and fixed routes. Similarly, Class G airspace—which varies by geography but is generally the airspace from the surface to 700 feet above ground—is uncontrolled and virtually open access.

Valuable resources vary immensely in their character (taxi medallions, real estate, radio spectrum, intellectual property, water), and a resource-use paradigm, once selected, requires iteration and modification to ensure productive use. “The trick,” Prof. Richard Epstein notes, “is to pick the right initial point to reduce the stress on making these further adjustments.” If indeed dozens of operators will be vying for variable drone and VTOL routes in hundreds of local markets, exclusive use models could create more social benefits and output than open access and regulatory management. NASA is exploring complex coordination systems in this airspace but, rather than agency permissions, lawmakers should consider using property rights and the price mechanism.

The initial allocation of airspace could be determined by auction. An agency, probably the FAA, would:

  1. Identify and define geographic parcels of Class G airspace;
  2. Auction off the parcels to any party (private corporations, local governments, non-commercial stakeholders, or individual users) for a term of years with an expectation of renewal; and
  3. Permit the sale, combination, and subleasing of those parcels.
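The steps above can be sketched in miniature. The toy auction below is purely illustrative: the parcel names, bidders, and dollar figures are invented, and no such FAA mechanism exists today. It assigns each hypothetical Class G parcel to the highest bidder at the second-highest bid, a common auction design that encourages bidders to reveal their true valuations.

```python
# Illustrative only: a toy sealed-bid, second-price auction for
# hypothetical Class G airspace parcels. All names and numbers invented.

def auction_parcels(bids):
    """bids: {parcel: {bidder: amount}} -> {parcel: (winner, price)}.
    Each parcel goes to the highest bidder, who pays the second-highest
    bid (or their own bid if unopposed)."""
    results = {}
    for parcel, offers in bids.items():
        ranked = sorted(offers.items(), key=lambda kv: kv[1], reverse=True)
        winner, top = ranked[0]
        price = ranked[1][1] if len(ranked) > 1 else top
        results[parcel] = (winner, price)
    return results

bids = {
    "downtown-0-700ft": {"DroneCo": 120_000, "VTOL Air": 150_000, "City": 90_000},
    "suburb-0-700ft": {"DroneCo": 40_000},
}
print(auction_parcels(bids))
# {'downtown-0-700ft': ('VTOL Air', 120000), 'suburb-0-700ft': ('DroneCo', 40000)}
```

Step 3 (sale, combination, and subleasing) is what this sketch omits: secondary markets would let these initial assignments be reshuffled toward higher-valued uses over time.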

The likely alternative scenario—regulatory allocation and management of airspace—derives from historical precedent in aviation and spectrum policy:

  1. First movers and the politically powerful acquire de facto control of low-altitude airspace,
  2. Incumbents and regulators exclude and inhibit newcomers and innovators,
  3. The rent-seeking and resource waste becomes unendurable for lawmakers, and
  4. Market-based reforms are slowly and haphazardly introduced.

For instance, after demand for commercial flights took off in the 1960s, a command-and-control quota system was created for crowded Northeast airports. Takeoff and landing rights, called “slots,” were assigned to early airlines, but regulators did not allow airlines to sell those rights. The anticompetitive concentration and hoarding of airport slots is still being slowly unraveled by Congress and the FAA to this day. There’s a similar story for government assignment of spectrum over decades, as explained in Thomas Hazlett’s excellent new book, The Political Spectrum.

The benefit of an auction, plus secondary markets, is that the resource is generally put to its highest-valued use. Secondary markets and subleasing also permit latecomers and innovators to gain resource access despite lacking an initial assignment and political power. Further, exclusive use rights would provide VTOL operators (and passengers) the added assurance that routes would be “clear” of potential collisions. (A more regulatory regime might provide that assurance, but likely via complex restrictions on airspace use.) Airspace rights would be a new cost for operators, but exclusive use means operators can economize on complex sensors, other safety devices, and lobbying costs. Operators would also possess an asset to sublease and monetize.

Another bonus (from the government’s point of view) is that the sale of Class G airspace can provide government revenue. Revenue would be slight at first but could prove lucrative once there’s substantial commercial interest. The federal government, for instance, auctions off its usage rights for grazing, oil and gas retrieval, radio spectrum, mineral extraction, and timber harvesting. Spectrum auctions alone have raised over $100 billion for the Treasury since they began in 1994.

[originally published on Plaintext on June 21, 2017.]

This summer, we celebrate the 20th anniversary of two developments that gave us the modern Internet as we know it. One was a court case that guaranteed online speech would flow freely, without government prior restraints or censorship threats. The other was an official White House framework for digital markets that ensured the free movement of goods and services online.

The result of these two vital policy decisions was an unprecedented explosion of speech freedoms and commercial opportunities whose benefits we continue to enjoy twenty years later.

While it is easy to take all this for granted today, it is worth remembering that, in the long arc of human history, no technology or medium has more rapidly expanded the range of human liberties — both speech and commercial liberties — than the Internet and digital technologies. But things could have turned out much differently if not for the crucially important policy choices the United States made for the Internet two decades ago.

I’ve written here before about the problems associated with the “technopanic mentality,” especially when it comes to how technopanics sometimes come to shape public policy decisions and restrict important new, life-enriching innovations. As I argued in a recent book, the problem with this sort of Chicken Little thinking is that, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning, it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.”

Perhaps the worst thing about worst-case thinking is how short-sighted and hubristic it can be. The technopanic crowd often has an air of snootiness about them: they ask us to largely ignore the generally positive long-run trends associated with technological innovation and instead focus on hypothetical fears about an uncertain future that apparently only they can foresee. This is the case whether they are predicting the destruction of jobs, the economy, lifestyles, or culture. Techno-dystopia lies just around the corner, they say, and the rest of us are ignorant sheep who just can’t see it coming!

In his wonderful 2013 book, Smarter Than You Think: How Technology Is Changing Our Minds for the Better, Clive Thompson correctly noted that “dystopian predictions are easy to generate” and “doomsaying is emotionally self-protective: if you complain that today’s technology is wrecking the culture, you can tell yourself you’re a gimlet-eyed critic who isn’t hoodwinked by high-tech trends and silly, popular activities like social networking. You seem like someone who has a richer, deeper appreciation for the past and who stands above the triviality of today’s life.”

Stated differently, the doomsayers are guilty of a type of social and technical arrogance. They are almost always wrong on history, wrong on culture, and wrong on facts. Again and again, humans have proven remarkably resilient in the face of technological change and have overcome short-term adversity. Yet, the technopanic pundits are almost never called out for their elitist attitudes later when their prognostications are proven wildly off-base. And even more concerning is the fact that their Chicken Little antics lead them and others to ignore the more serious risks that could exist out there and which are worthy of our attention.

Here’s a nice example of that last point, and it comes from a silent film made all the way back in 1911!

Guest post from Sasha Moss, R Street Institute (Originally published on TechDirt on 5/24/17)

The U.S. Senate is about to consider mostly pointless legislation that would make the nation’s register of copyrights—the individual who heads the U.S. Copyright Office, officially a part of the Library of Congress—a presidential appointment that would be subject to Senate confirmation.

While the measure has earned praise from some in the content industry, including the Motion Picture Association of America, unless senators can find better ways to modernize our copyright system, they really should just go back to the drawing board.

The Register of Copyrights Selection and Accountability Act of 2017 already cleared the U.S. House in April by a 378-48 margin. Under the bill and its identical Senate companion, the power to select the register would be taken away from Librarian of Congress Dr. Carla Hayden. Instead, the president would select an appointee from among three names put forward by a panel that includes the librarian, the speaker of the House and the majority and minority leaders of both the House and Senate. And the register would now be subject to a 10-year term with the option of multiple reappointments, like the Librarian of Congress.

The legislation is ostensibly the product of the House Judiciary Committee’s multiyear series of roundtables and comments on modernizing the U.S. Copyright Office. In addition to changes to the process of selecting the register, the committee had recommended creating a stakeholder advisory board, adding a chief economist and a chief technology officer, making information technology upgrades at the office, creating a searchable digital database of ownership information to lower transaction costs in licensing and royalty payments, and creating a small claims court for relatively minor copyright disputes.

[Remarks prepared for Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy & Ethics at Arizona State University, Phoenix, AZ, May 18, 2017.]

_________________

What are we to make of this peculiar new term “permissionless innovation,” which has gained increasing currency in modern technology policy discussions? And how much relevance has this notion had—or should it have—on those conversations about the governance of emerging technologies? That’s what I’d like to discuss here today.

Uncertain Origins, Unclear Definitions

I should begin by noting that while I have written a book with the term in the title, I take no credit for coining the phrase “permissionless innovation,” nor have I been able to determine who the first person was to use the term. The phrase is sometimes attributed to Grace M. Hopper, a computer scientist who was a rear admiral in the United States Navy. She once famously noted that “it’s easier to ask forgiveness than it is to get permission.”

“Hopper’s Law,” as it has come to be known in engineering circles, is probably the most concise articulation of the general notion of “permissionless innovation” that I’ve ever heard, but Hopper does not appear to have ever used the actual phrase anywhere. Moreover, Hopper was not necessarily applying this notion to the realm of technological governance, but was seemingly speaking more generically about the benefit of trying new things without asking for the blessing of any number of unnamed authorities or overseers—which could include businesses, bosses, teachers, or perhaps even government officials.

Guest post from Joe Kane, R Street Institute

We seldom see a cadre of deceased Founding Fathers petition the Federal Communications Commission, but this past week was an exception. All the big hitters—from George Washington to Benjamin Franklin—filed comments in favor of a free internet. Abraham Lincoln also weighed in from beyond the grave, reprising his threat “to attack with the North” if the commission doesn’t free the internet.

These dead Sons of Liberty likely are pleased that the FCC’s proposed rules take steps to protect innovation and free the internet from excessive regulation. But it shouldn’t surprise us that politicians have strong opinions. What about some figures with a broader perspective?

Jesus weighed in with forceful, if sometimes incomprehensible, views that take both sides on the commission’s Notice of Proposed Rulemaking, which seeks comment on scaling back the FCC’s 2015 decision to subject internet service to the heavy hand of Title II of the Communications Act of 1934. Satan, on the other hand, was characteristically harsher, entreating the commissioners to “rot in Florida.”

Our magical friends across the pond also chimed in with some thoughts. Harry Potter, no doubt frustrated with the slow Wi-Fi at Hogwarts, seems strongly in favor of keeping Title II. His compatriot Hermione Granger, however, is more supportive of the current FCC’s efforts to move away from laws designed to regulate a now defunct telephone monopoly, perhaps because she realizes the 2015 rules won’t do much to improve internet service. Dumbledore used his comments to give a favorable evaluation of both Title II and the casting of Jude Law to portray his younger self in an upcoming film.

A few superheroes also deigned to join the discourse. Wonder Woman, Batman and Superman joined a coalition letter which made up with brevity what it lacked in substance. The same can’t be said for the FCC’s notice itself, which contains dozens of pages of analysis and seeks comments on many substantive suggestions designed to reduce regulatory burdens on infrastructure investment and the next generation of real time, internet-based services. Another, more diverse, coalition letter was joined by Morgan Freeman, Pepe the Frog, a “Mr. Dank Memes” and the Marvel villain (and Norse trickster god) Loki. It contained a transcript of Jerry Seinfeld’s Bee Movie.

Speaking of villains, Josef Stalin made known his preference that no rules be changed. But Adolf Hitler attacked Stalin’s position like it was 1941.

Then there are those with advanced degrees. Doctor Bigfoot and Doctor Who filed separate comments in support of net neutrality.

In a debate too often characterized by shrill and misleading rhetoric, it’s heartening to see the FCC’s comment process is engaging such lofty figures to substantively inform the policymaking process. I mean, it sure would be a shame if taxpayer money supporting the mandatory review of the 1,500,000+ comments in this proceeding was wasted on fake responses.

This post was originally posted at the R Street blog.

[originally posted on Medium]

Today is the anniversary of the day the machines took over.

Exactly twenty years ago today, on May 11, 1997, the great chess grandmaster Garry Kasparov became the first chess world champion to lose a match to a supercomputer. His battle with IBM’s “Deep Blue” was a highly-publicized media spectacle, and when he lost Game 6 of his match against the machine, it shocked the world.

At the time, Kasparov was bitter about the loss and even expressed suspicions about how Deep Blue’s team of human programmers and chess consultants might have tipped the match in favor of machine over man. Although he still wonders about how things went down behind the scenes during the match, Kasparov is no longer as sore as he once was about losing to Deep Blue. Instead, Kasparov has built on his experience that fateful week in 1997 and learned how he and others can benefit from it.

The result of this evolution in his thinking is Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins, a book which serves as a paean to human resiliency and our collective ability as a species to adapt in the face of technological disruption, no matter how turbulent.

Kasparov’s book serves as the perfect antidote to the prevailing gloom-and-doom narrative in modern writing about artificial intelligence (AI) and smart machines. His message is one of hope and rational optimism about a future in which we won’t be racing against the machines but rather running alongside them and benefiting in the process.

Overcoming the Technopanic Mentality

There is certainly no shortage of books and articles being written today about AI, robotics, and intelligent machines. The tone of most of these tracts is extraordinarily pessimistic. Each page is usually dripping with dystopian dread and decrying a future in which humanity is essentially doomed.

As I noted in a recent essay about “The Growing AI Technopanic,” after reading through most of these books and articles, one is left to believe that in the future: “Either nefarious-minded robots enslave us or kill us, or AI systems treacherously trick us, or at a minimum turn our brains to mush.” These pessimistic perspectives are clearly on display within the realm of fiction, where every sci-fi book, movie, or TV show depicts humanity as certain losers in the proverbial “race” against machines. But such lugubrious lamentations are equally prevalent within the pages of many non-fiction books, academic papers, editorials, and journalistic articles.

Given the predominantly panicky narrative surrounding the age of smart machines, Kasparov’s Deep Thinking serves as a welcome breath of fresh air. The aim of his book is finding ways of “doing a smarter job of humans and machines working together” to improve well-being.

By Jordan Reimschisel & Adam Thierer

[Originally published on Medium on May 2, 2017.]

Americans have schizophrenic opinions about artificial intelligence (AI) technologies. Ask the average American what they think of AI and they will often respond with a combination of fear, loathing, and dread. Yet, the very same AI applications they claim to be so anxious about are already benefiting their lives in profound ways.

Last week, we posted complementary essays about the growing “technopanic” over artificial intelligence and the potential for that panic to undermine many important life-enriching medical innovations or healthcare-related applications. We were inspired to write those essays after reading the results of a recent poll conducted by Morning Consult, which suggested that the public was very uncomfortable with AI technologies. “A large majority of both Republicans and Democrats believe there should be national and international regulations on artificial intelligence,” the poll found. Of the 2,200 American adults surveyed, “73 percent of Democrats said there should be U.S. regulations on artificial intelligence, as did 74 percent of Republicans and 65 percent of independents.”

We noted that there were reasons to question the significance of those results in light of the binary way in which the questions were asked. Nonetheless, there are clearly some serious concerns among the public about AI and robotics. You see that when you read deeper into the poll results for specific questions and find respondents saying that they are “somewhat” to “very uncomfortable” about a wide range of specific AI applications.

Yet, in each case, Americans are already deriving significant benefits from each of the AI applications they claim to be so uncomfortable with.


There is reporting suggesting that the Trump FCC may move to eliminate the FCC’s complex Title II regulations for the Internet and restore the FTC’s ability to police anticompetitive and deceptive practices online. This is obviously welcome news. These reports also suggest that FCC Chairman Pai and the FTC will require that ISPs add open Internet principles to their terms of service, that is, no unreasonable blocking or throttling of content and no paid priority. These principles have always been imprecise because federal law allows ISPs to block objectionable content if they wish (like pornography or violent websites) and because ISPs have a First Amendment right to curate their services.

Whatever the exact wording, there shouldn’t be a per se ban on paid priority. Whatever policy develops should limit anticompetitive paid priority, not all paid priority. Paid prioritization is simply a form of consideration payment, which is economists’ term for when upstream producers pay downstream retailers or distributors for special treatment. There’s an economics literature on consideration payments, and it’s an accepted business practice in many other industries. Further, consideration payments often benefit small providers and niche customers. Some small and large companies with interactive IP services might be willing to pay for end-to-end service reliability.

The Open Internet Order’s paid priority ban has always been shortsighted because it attempts to preserve the Internet as it existed circa 2002. It resembles the FCC’s unfounded insistence for decades that subscription TV (i.e., how the vast majority of Americans consume TV today) was against “the public interest.” Like the defunct subscription TV ban, the paid priority ban is an economics-free policy that will hinder new services.

Despite what late-night talk show hosts might say, “fast lanes” on the Internet are here and will continue. “Fast lanes” have always been permitted because, as Obama’s US CTO Aneesh Chopra noted, some emerging IP services need special treatment. Priority transmission was built into Internet protocols years ago and the OIO doesn’t ban data prioritization; it bans BIAS providers from charging “edge providers” a fee for priority.

The notion that there’s a level playing field online needing preservation is a fantasy. Non-real-time services like Netflix streaming, YouTube, Facebook pages, and major websites can mostly be “cached” on servers scattered around the US. Major web companies have their own form of paid prioritization: they spend millions annually, including large payments to ISPs, on transit agreements, CDNs, and interconnection in order to avoid congested Internet links.

The problem with a blanket paid priority ban is that it biases the evolution of the Internet in favor of these cache-able services and against real-time or interactive services like teleconferencing, live TV, and gaming. Caching doesn’t work for these services because there’s nothing to cache beforehand. 

When would paid prioritization make sense? Most likely a specialized service for dedicated users that requires end-to-end reliability. 

I’ll use a plausible example to illustrate the benefits of consideration payments online: a telepresence service for deaf people. As Martin Geddes described, a decade ago the government in Wales developed such a service. The service architects discovered that a well-functioning service had quality characteristics not supplied by ISPs. ISPs and video chat apps like Skype optimize their networks, video codecs, and services for non-deaf people (i.e., most customers) and prioritize consistent audio quality over video quality. While that’s useful for most people, deaf people need basically the opposite optimization because they need to perceive subtle hand and finger motions. The typical app that prioritizes audio, not video, doesn’t work for them.

But high-def real-time video quality requires upstream and downstream capacity reservation and end-to-end reliability. This is not cheap to provide. An ISP, in this illustration, has three options: charge the telepresence provider, charge deaf customers a premium, or spread the costs across all customers. The paid priority ban means ISPs can only charge customers for increased costs. The ban thus unnecessarily limits the potential for such services, since there may be companies or nonprofits willing to subsidize them.

It’s a specialized example, but it illustrates the idiosyncratic technical requirements of many real-time services. In fact, real-time services are the next big challenge in the Internet’s evolution. As streaming media expert Dan Rayburn noted, “traditional one-way live streaming is being disrupted by the demand for interactive engagement.” Large and small edge companies are increasingly looking for low-latency video solutions. Today, a typical “live” event is broadcast online to viewers with a 15- to 45-second delay. This latency limits or kills the potential for interactive online streaming services like online talk shows, pet cams, online auctions, videogaming, and online classrooms.

If the FTC takes back oversight of ISPs and the Internet it should, as with any industry, permit any business practice that complies with competition law and consumer protection law. The agency should disregard the unfounded belief that consideration payments online (“paid priority”) are always harmful.