Congress is poised to act on “driverless car” legislation that could help deliver one of the greatest public health success stories of our lifetime by bringing down the staggering costs associated with car crashes.

The SELF DRIVE Act currently awaiting a vote in the House of Representatives would pre-empt existing state laws concerning driverless cars and replace them with a federal standard. The law would formalize the National Highway Traffic Safety Administration’s (NHTSA) existing standards for driverless cars and establish the agency’s role as the regulator of the design, construction, and performance of this technology. The states would continue to regulate driverless cars and their technology in the same way they regulate driver-operated motor vehicles today.

It is important that we get policy right on this front because motor vehicle accidents result in over 35,000 deaths and over 2 million injuries each year. These numbers continue to rise as more people hit the roads thanks to lower gas prices and as new driving distractions emerge. NHTSA estimates that 94 percent of these crashes are caused by driver error.

Driverless cars provide a potential solution to this tragedy. One study estimated that widespread adoption of such technology would avoid about 28 percent of all motor vehicle accidents and prevent nearly 10,000 deaths each year. This lifesaving technology may be generally available sooner than expected if innovators are allowed to freely develop it.
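
A quick back-of-the-envelope check (my arithmetic, not the study’s) shows those two figures are consistent, assuming the 28 percent reduction applies roughly uniformly to fatal crashes:

$$0.28 \times 35{,}000 \approx 9{,}800 \text{ deaths averted per year,}$$

which is indeed “nearly 10,000.”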


Are you interested in emerging technologies and the public policy issues surrounding them? Then come to work with me at the Mercatus Center at George Mason University!

The Mercatus Center is currently looking to hire a new Senior Research Fellow in our Technology Policy Program. Our tech policy team covers a large and growing array of cutting-edge issues, including robotics, AI, and autonomous vehicles; commercial drones; the Internet of Things; virtual reality; cryptocurrencies; the sharing economy; 3D printing; and advanced medical and health technologies, just to name a few current priorities.

The most exciting—and challenging—thing about covering tech policy is that the landscape of issues and concerns is always morphing and growing. Our new Senior Research Fellow will help our team determine our tech policy priorities going forward and will then be responsible for engaging in scholarly work and public speaking on those topics.

All the finer details about this new position are listed on the Mercatus website. If you’re interested and qualified, please apply! Or, if you know of others who might be interested in this position, please forward this notice along to them.

On August 1, Sens. Mark Warner and Cory Gardner introduced the “Internet of Things Cybersecurity Improvement Act of 2017.” The goal of the legislation, according to its sponsors, is to establish “minimum security requirements for federal procurements of connected devices.” Pointing to the growing number of connected devices and their use in prior cyber-attacks, the sponsors aim to provide flexible requirements that limit the vulnerabilities of such networks. Specifically, the bill requires new Internet of Things (IoT) devices purchased by the federal government to be patchable, free of known vulnerabilities, and reliant on standard protocols. Overall, the legislation attempts to increase and standardize the baseline security of connected devices while still allowing innovation in the field to remain relatively permissionless. As Ryan Hagemann at the Niskanen Center states, the bill is generally perceived as a step in the right direction, promoting security while limiting the potential harms of regulation to overall innovation in the Internet of Things.
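
To make those three requirements concrete, here is a minimal, purely illustrative sketch of how a procurement office might encode them as a vendor checklist. Everything below (the class, the field names, and the placeholder CVE identifier) is hypothetical; the bill itself specifies no such schema.

```python
from dataclasses import dataclass, field

# Hypothetical vendor self-attestation record; field names are
# illustrative, not drawn from the bill's text.
@dataclass
class IoTDeviceAttestation:
    supports_firmware_updates: bool  # device is "patchable"
    known_cve_ids: list = field(default_factory=list)  # disclosed known vulnerabilities
    uses_standard_protocols: bool = True  # e.g., standard TLS, not bespoke crypto

def meets_baseline(device: IoTDeviceAttestation) -> bool:
    """Check the bill's three baseline requirements as summarized above:
    patchable, free of known vulnerabilities, reliant on standard protocols."""
    return (device.supports_firmware_updates
            and not device.known_cve_ids
            and device.uses_standard_protocols)

# Example: a connected camera shipping with a disclosed, unpatched
# vulnerability would fail the baseline check.
camera = IoTDeviceAttestation(
    supports_firmware_updates=True,
    known_cve_ids=["CVE-2017-0000"],  # placeholder identifier
)
print(meets_baseline(camera))  # False
```

The point of the sketch is simply that the bill’s requirements are checkable properties of a device rather than a prescriptive design standard, which is why its sponsors describe the approach as flexible.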


The Mercatus Center at George Mason University has just released a new paper, “Artificial Intelligence and Public Policy,” which I co-authored with Andrea Castillo O’Sullivan and Raymond Russell. This 54-page paper can be downloaded via the Mercatus website, SSRN, or ResearchGate. Here is the abstract:

There is growing interest in the market potential of artificial intelligence (AI) technologies and applications as well as in the potential risks that these technologies might pose. As a result, questions are being raised about the legal and regulatory governance of AI, machine learning, “autonomous” systems, and related robotic and data technologies. Citing concerns about labor market effects, social inequality, and even physical harm, some have called for precautionary regulations that could have the effect of limiting AI development and deployment. In this paper, we recommend a different policy framework for AI technologies. At this nascent stage of AI technology development, we think a better case can be made for prudence, patience, and a continuing embrace of “permissionless innovation” as it pertains to modern digital technologies. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated, and problems, if they develop at all, can be addressed later.

By Brent Skorup and Melody Calkins

Recently, the FCC sought comments for its Media Modernization Initiative in its effort to “eliminate or modify [media] regulations that are outdated, unnecessary, or unduly burdensome.” The regulatory thicket for TV distribution has long encumbered broadcast and cable providers. These rules encourage large, homogeneous cable TV bundles and burden cable and satellite operators with high compliance costs. (See the complex web of TV regulations at the Media Metrics website.)

One reason “skinny bundles” from online video providers and cable operators are attracting consumers is that online video circumvents the FCC’s Rube Goldberg-like system altogether. The FCC should end its 50-year experiment with TV regulation, which, among other things, has raised the cost of TV and degraded the First Amendment rights of media outlets.

The proposal to eliminate legacy media rules garnered considerable support from a wide range of commenters. In our filed reply comments, we identify four regulatory rules ripe for removal:

  • News distortion. This uncodified, under-the-radar rule allows the FCC to revoke a broadcaster’s license if it finds that the broadcaster deliberately engaged in “news distortion, staging, or slanting.” The rule traces back to the FCC’s longstanding position that it can revoke licenses from broadcast stations if programming is not “in the public interest.”

    Though uncodified and not strictly enforced, the rule was reiterated in the FCC’s 2008 broadcast guidelines. Its outline was laid out in the 1998 case Serafyn v. CBS, involving a complaint by a Ukrainian-American who alleged that the “60 Minutes” news program had unfairly edited interviews to portray Ukrainians as backward and anti-Semitic. The FCC dismissed the complaint, but the D.C. Circuit reversed that dismissal and required FCC intervention. (CBS settled and the complaint was dropped before the FCC could intervene.)

    “Slanted” and distorted news can be found in (unregulated) cable news, newspapers, Twitter, and YouTube. The news distortion rule should be repealed and broadcasters should have regulatory parity (and their full First Amendment rights) restored.
  • Must-carry. The rule requires cable operators to distribute the programming of local broadcast stations at broadcasters’ request. (Stations carrying relatively low-value broadcast networks seek carriage via must-carry. Stations carrying popular networks like CBS and NBC can instead negotiate payment from cable operators via “retransmission consent” agreements.) Must-carry was narrowly sustained by the Supreme Court in 1994 against a First Amendment challenge, on the grounds that cable operators had monopoly power in the pay-TV market. Since then, however, cable’s market share has shrunk from 95% to 53%. Broadcast stations have far more options for distribution, including satellite TV, telco TV, and online distribution, and it’s unlikely the rules would survive a First Amendment challenge today.
  • Network nonduplication and syndicated exclusivity. These rules limit how and when broadcast programming can be distributed and allow the FCC to intervene if a cable operator breaches a contract with a broadcast station. But the (exempted) distribution of hundreds of non-broadcast channels (e.g., CNN, MTV, ESPN) shows that programmers and distributors are fully capable of negotiating carriage privately, without FCC oversight. These rules simply make licensing negotiations more difficult and invite FCC intervention.

Finally, we identify retransmission consent regulations and compulsory licenses for repeal. Because “retrans” interacts with copyright matters outside the FCC’s jurisdiction, we encourage the FCC to work with the Copyright Office in advising Congress to repeal these statutes. Cable operators dislike the retrans framework, and broadcasters dislike being compelled to license programming at regulated rates. These interventions simply aren’t needed (hundreds of cable and online-only TV channels operate outside this framework), and neither the FCC nor the Copyright Office particularly likes refereeing these fights. The FCC should break the stalemate and approach the Copyright Office about advocating for direct licensing of broadcast TV content.

My professional life is dedicated to researching the public policy implications of various emerging technologies. Of the many issues and sectors that I cover, none are more interesting or important than advanced medical innovation. After all, new health care technologies offer the greatest hope for improving human welfare and longevity. Consequently, the public policies that govern these technologies and sectors will have an important bearing on just how much life-enriching or life-saving medical innovation we actually get going forward.

Few people are doing better reporting on the intersection of advanced technology and medicine — as well as the effects of regulation on those fields — than my Mercatus Center colleague Jordan Reimschisel. In a very short period of time, Jordan has completely immersed himself in these complex, cutting-edge topics and produced a remarkable body of work discussing how, in his words, “technology can merge with medicine to democratize medical decision making, empower patients to participate in the treatment process, and promote better health outcomes for more patients at lower and lower costs.” He gets deep into the weeds of the various technologies he writes about as well as the legal, ethical, and economic issues surrounding each topic.

I encouraged him to start an ongoing compendium of his work on these topics so that we could continue to highlight his research, some of which I have been honored to co-author with him. I have listed his current catalog down below, but jump over to this Medium page he set up and bookmark it for future reference. This is some truly outstanding work and I am excited to see where he goes next with topics as wide-ranging as “biohackerspaces,” democratized or “personalized” medicine, advanced genetic testing and editing techniques, and the future of the FDA in an age of rapid change.

Give Jordan a follow on Twitter (@jtreimschisel) and make sure to follow his Medium page for his dispatches from the front lines of the debate over advanced medical innovation and its regulation.


“First electricity, now telephones. Sometimes I feel as if I were living in an H.G. Wells novel.” –Dowager Countess, Downton Abbey

Every technology we take for granted was once new, different, and disruptive, and was often ridiculed and resisted as a result. Electricity, telephones, trains, and television all once caused widespread fears, much as robots, artificial intelligence, and the internet of things do today. Typically, most people eventually realize that these fears were misplaced and overly pessimistic; the technology diffuses, and we can barely remember our lives without it. But in recent technopanics, there has been concern that the legal system is not properly equipped to handle the possible harms from these new technologies. As a result, there are often calls to regulate or rein in their use.

In the early 1980s, video cassette recorders (VCRs) caused a legal technopanic. The concern was not that VCRs would lead to some bizarre human mutation, as in many technopanics, but that the existing system of copyright infringement and vicarious liability could not adequately address the potential harm to the motion picture industry. The then-president of the Motion Picture Association of America, Jack Valenti, famously told Congress, “I say to you that the VCR is to the American film producer and the American public as the Boston Strangler is to the woman home alone.”


If the techno-pessimists are right and robots are set to take all the jobs, shouldn’t employment in Amazon warehouses be plummeting right now? After all, Amazon’s sorting and fulfillment centers have been automated at a rapid pace, with robotic technologies now being integrated into almost every facet of the process. (Just watch the video below to see it all in action.)

And yet according to this Wall Street Journal story by Laura Stevens, Amazon is looking to immediately fill 50,000 new jobs, which would mean that its U.S. workforce “would swell to around 300,000, compared with 30,000 in 2011.”  According to the article, “Nearly 40,000 of the promised jobs are full-time at the company’s fulfillment centers, including some facilities that will open in the coming months. Most of the remainder are part-time positions available at Amazon’s more than 30 sorting centers.”

How can this be? Shouldn’t the robots have eaten all those jobs by now?


“Responsible research and innovation,” or “RRI,” has become a major theme in academic writing and conferences about the governance of emerging technologies. RRI might be considered just another variant of corporate social responsibility (CSR), and it indeed borrows from that heritage. What makes RRI unique, however, is that it is more squarely focused on mitigating the potential risks that could be associated with various technologies or technological processes. RRI is particularly concerned with “baking” certain values and design choices into the product lifecycle before new technologies are released into the wild.

In this essay, I want to consider how RRI lines up with the opposing technological governance regimes of “permissionless innovation” and the “precautionary principle.” More specifically, I want to address the question of whether “permissionless innovation” and “responsible innovation” are even compatible. While participating in recent university seminars and other tech policy events, I have encountered a certain degree of skepticism—and sometimes outright hostility—after suggesting that, properly understood, “permissionless innovation” and “responsible innovation” are not warring concepts and that RRI can co-exist peacefully with a legal regime that adopts permissionless innovation as its general tech policy default. Indeed, the application of RRI lessons and recommendations can strengthen the case for adopting a more “permissionless” approach to innovation policy in the United States and elsewhere.

It’s becoming clearer why, for six of his eight years in office, President Obama’s appointed FCC chairmen resisted regulating the Internet with Title II of the 1934 Communications Act. Chairman Wheeler famously did not want to go that legal route. It was only after President Obama and the White House called on the FCC in late 2014 to use Title II that Chairman Wheeler relented. If anything, the hastily drafted 2015 Open Internet rules give ISPs a new incentive to curate the Internet in ways they didn’t want to before.

The 2016 court decision upholding the rules was a Pyrrhic victory for the net neutrality movement. In short, the decision revealed that the 2015 Open Internet Order provides no meaningful net neutrality protections: it allows ISPs to block and throttle content. As the judges who upheld the Order said, “The Order…specifies that an ISP remains ‘free to offer ‘edited’ services’ without becoming subject to the rule’s requirements.”

The 2014 White House pressure didn’t occur in a vacuum. It came immediately after Democratic losses in the November 2014 midterms. As Public Knowledge president Gene Kimmelman tells it, President Obama needed to give progressives “a clean victory for us to show that we are standing up for our principles.” The slapdash legal finessing that followed was presaged by President Obama’s November 2014 national address urging Title II classification of the Internet, which, on the Obama White House website, cites the wrong communications law to this day.

The FCC staff did their best with what they were given, but the resulting Order was aimed at political symbolism and at acquiring jurisdiction to regulate the Internet, not at meaningful “net neutrality” protections. As internal FCC emails produced in a Senate majority report show, Wheeler’s reversal that week caught the nonpartisan career FCC staff off guard. Literally overnight, FCC staff had to scrap the “hybrid” (non-Title II) order they’d been carefully drafting for weeks and scrape together a legal justification for using Title II. This meant calling in advocates to enhance the record and making dubious citations to the economics literature. Former FCC chief economist Prof. Michael Katz, whose work was cited in the Order, later told Forbes that he suspected the “FCC cited my papers as an inside joke, because they know how much I think net neutrality is a bad idea.”

Applying 1934 telegraph and telephone laws to the Internet was always going to have unintended consequences, but the politically driven Order increasingly looks like an own-goal, even to supporters. Former FCC chief technologist Jon Peha, who supports Title II classification of ISPs, almost immediately raised the alarm that the Order offered “massive loopholes” to ISPs that could make the rules irrelevant. This became clear when the FCC attorney defending the Order in court acknowledged that ISPs are free to block and filter content and thereby escape the Open Internet regulations and Title II. These concessions from the FCC surprised even AT&T VP Hank Hultquist:

Wow. ISPs are not only free to engage in content-based blocking, they can even create the long-dreaded fast and slow lanes so long as they make their intentions sufficiently clear to customers.

So the Open Internet Order not only permits the net neutrality “nightmare scenario,” it gives ISPs an incentive to curate the Internet. Despite the activist PR surrounding the Order, so-called “fast lanes” (like carrier-provided VoIP, VoLTE, and IPTV) have existed for years, and the FCC rules allow them. If the Order permits ISP blocking, throttling, and “fast lanes,” what remains of “net neutrality”?

Prof. Susan Crawford presciently warned in 2005: 

I have lost faith in our ability to write about code in words, and I’m confident that any attempt at writing down network neutrality will be so qualified, gutted, eviscerated, and emptied that it will end up being worse than useless.

Aside from some religious ISPs, ISPs don’t want to filter Internet content. But the Obama FCC, via the “net neutrality” rules, gives them a new incentive: the Order deregulates ISPs that filter. ISPs will fight the rules because they want to continue offering their conventional Internet service without submitting to the Title II baggage. This is why ISPs favor scrapping the Order: not only is it the FCC’s first claim of authority to regulate Internet access, but if the rules are not repealed, ISPs will be compelled to make difficult decisions about their business models and technologies in the future.