By Brent Skorup and Melody Calkins

Recently, the FCC sought comments for its Media Modernization Initiative in its effort to “eliminate or modify [media] regulations that are outdated, unnecessary, or unduly burdensome.” The regulatory thicket for TV distribution has long encumbered broadcast and cable providers. These rules encourage large, homogeneous cable TV bundles and burden cable and satellite operators with high compliance costs. (See the complex web of TV regulations at the Media Metrics website.)

One reason “skinny bundles” from online video providers and cable operators are attracting consumers is that online video circumvents the FCC’s Rube Goldberg-like system altogether. The FCC should end its 50-year experiment with TV regulation, which, among other things, has raised the cost of TV and degraded the First Amendment rights of media outlets.

The proposal to eliminate legacy media rules garnered considerable support from a wide range of commenters. In our reply comments, we identify four rules ripe for removal:

  • News distortion. This uncodified, under-the-radar rule allows the FCC to revoke a broadcaster’s license if it finds that the broadcaster deliberately engages in “news distortion, staging, or slanting.” The rule traces back to the FCC’s longstanding position that it can revoke licenses from broadcast stations if programming is not “in the public interest.”

    Though uncodified and not strictly enforced, the rule was reiterated in the FCC’s 2008 broadcast guidelines. The contours of the rule were laid out in the 1998 case Serafyn v. CBS, involving a complaint by a Ukrainian-American who alleged that the “60 Minutes” news program had unfairly edited interviews to portray Ukrainians as backward and anti-Semitic. The FCC dismissed the complaint, but the DC Circuit reversed that dismissal and required the FCC to intervene. (CBS settled and the complaint was dropped before the FCC could intervene.)

    “Slanted” and distorted news can be found in (unregulated) cable news, newspapers, Twitter, and YouTube. The news distortion rule should be repealed and broadcasters should have regulatory parity (and their full First Amendment rights) restored.
  • Must-carry. The rule requires cable operators to distribute the programming of local broadcast stations at broadcasters’ request. (Stations carrying relatively low-value broadcast networks seek carriage via must-carry. Stations carrying popular networks like CBS and NBC can negotiate payment from cable operators via “retransmission consent” agreements.) Must-carry was narrowly sustained by the Supreme Court in 1994 against a First Amendment challenge, on the grounds that cable operators had monopoly power in the pay-TV market. Since then, however, cable’s market share has fallen from 95% to 53%. Broadcast stations have far more options for distribution, including satellite TV, telco TV, and online distribution, and it’s unlikely the rules would survive a First Amendment challenge today.
  • Network nonduplication and syndicated exclusivity. These rules limit how and when broadcast programming can be distributed and allow the FCC to intervene if a cable operator breaches a contract with a broadcast station. But the (exempted) distribution of hundreds of non-broadcast channels (e.g., CNN, MTV, ESPN) shows that programmers and distributors are fully capable of negotiating carriage privately, without FCC oversight. These rules simply make licensing negotiations more difficult and invite FCC intervention.

Finally, we identify retransmission consent regulations and compulsory licenses for repeal. Because “retrans” interacts with copyright matters outside of the FCC’s jurisdiction, we encourage the FCC to work with the Copyright Office in advising Congress to repeal these statutes. Cable operators dislike the retrans framework, and broadcasters dislike being compelled to license programming at regulated rates. These interventions simply aren’t needed (hundreds of cable and online-only TV channels operate outside of this framework), and neither the FCC nor the Copyright Office particularly likes refereeing these fights. The FCC should break the stalemate and approach the Copyright Office about advocating for direct licensing of broadcast TV content.

My professional life is dedicated to researching the public policy implications of various emerging technologies. Of the many issues and sectors that I cover, none are more interesting or important than advanced medical innovation. After all, new health care technologies offer the greatest hope for improving human welfare and longevity. Consequently, the public policies that govern these technologies and sectors will have an important bearing on just how much life-enriching or life-saving medical innovation we actually get going forward.

Few people are doing better reporting on the intersection of advanced technology and medicine — as well as the effects of regulation on those fields — than my Mercatus Center colleague Jordan Reimschisel. In a very short period of time, Jordan has completely immersed himself in these complex, cutting-edge topics and produced a remarkable body of work discussing how, in his words, “technology can merge with medicine to democratize medical decision making, empower patients to participate in the treatment process, and promote better health outcomes for more patients at lower and lower costs.” He gets deep into the weeds of the various technologies he writes about as well as the legal, ethical, and economic issues surrounding each topic.

I encouraged him to start an ongoing compendium of his work on these topics so that we could continue to highlight his research, some of which I have been honored to co-author with him. I have listed his current catalog down below, but jump over to this Medium page he set up and bookmark it for future reference. This is some truly outstanding work and I am excited to see where he goes next with topics as wide-ranging as “biohackerspaces,” democratized or “personalized” medicine, advanced genetic testing and editing techniques, and the future of the FDA in an age of rapid change.

Give Jordan a follow on Twitter (@jtreimschisel) and make sure to follow his Medium page for his dispatches from the front lines of the debate over advanced medical innovation and its regulation.

Continue reading →

“First electricity, now telephones. Sometimes I feel as if I were living in an H.G. Wells novel.” –Dowager Countess, Downton Abbey

Every technology we take for granted was once new, different, disruptive, and often ridiculed and resisted as a result. Electricity, telephones, trains, and television all once caused widespread fears, much as robots, artificial intelligence, and the internet of things do today. Typically, most people eventually realize that these fears were misplaced and overly pessimistic; the technology diffuses, and we can barely remember life without it. But in the recent technopanics, there has been a concern that the legal system is not properly equipped to handle the possible harms from these new technologies. As a result, there are often calls to regulate or rein in their use.

In the early 1980s, video cassette recorders (VCRs) caused a legal technopanic. The concern was not, as in many technopanics, that VCRs would lead to some bizarre human mutation, but rather that the existing system of copyright infringement and vicarious liability could not adequately address the potential harm to the motion picture industry. The then-president of the Motion Picture Association of America, Jack Valenti, famously told Congress, “I say to you that the VCR is to the American film producer and the American public as the Boston Strangler is to the woman home alone.”

Continue reading →

If the techno-pessimists are right and robots are set to take all the jobs, shouldn’t employment in Amazon warehouses be plummeting right now? After all, Amazon’s sorting and fulfillment centers have been automated at a rapid pace, with robotic technologies now being integrated into almost every facet of the process. (Just watch the video below to see it all in action.)

And yet according to this Wall Street Journal story by Laura Stevens, Amazon is looking to immediately fill 50,000 new jobs, which would mean that its U.S. workforce “would swell to around 300,000, compared with 30,000 in 2011.”  According to the article, “Nearly 40,000 of the promised jobs are full-time at the company’s fulfillment centers, including some facilities that will open in the coming months. Most of the remainder are part-time positions available at Amazon’s more than 30 sorting centers.”

How can this be? Shouldn’t the robots have eaten all those jobs by now?

Continue reading →

“Responsible research and innovation,” or “RRI,” has become a major theme in academic writing and conferences about the governance of emerging technologies. RRI might be considered just another variant of corporate social responsibility (CSR), and it indeed borrows from that heritage. What makes RRI unique, however, is that it is more squarely focused on mitigating the potential risks that could be associated with various technologies or technological processes. RRI is particularly concerned with “baking” certain values and design choices into the product lifecycle before new technologies are released into the wild.

In this essay, I want to consider how RRI lines up with the opposing technological governance regimes of “permissionless innovation” and the “precautionary principle.” More specifically, I want to address the question of whether “permissionless innovation” and “responsible innovation” are even compatible. While participating in recent university seminars and other tech policy events, I have encountered a certain degree of skepticism—and sometimes outright hostility—after suggesting that, properly understood, “permissionless innovation” and “responsible innovation” are not warring concepts and that RRI can co-exist peacefully with a legal regime that adopts permissionless innovation as its general tech policy default. Indeed, the application of RRI lessons and recommendations can strengthen the case for adopting a more “permissionless” approach to innovation policy in the United States and elsewhere. Continue reading →

It’s becoming clearer why, for six years out of eight, Obama’s appointed FCC chairmen resisted regulating the Internet with Title II of the 1934 Communications Act. Chairman Wheeler famously did not want to go that legal route. It was only after President Obama and the White House called on the FCC in late 2014 to use Title II that Chairman Wheeler relented. If anything, the hastily drafted 2015 Open Internet rules give ISPs a new incentive to curate the Internet in ways they didn’t want to before.

The 2016 court decision upholding the rules was a Pyrrhic victory for the net neutrality movement. In short, the decision revealed that the 2015 Open Internet Order provides no meaningful net neutrality protections–it allows ISPs to block and throttle content. As the judges who upheld the Order said, “The Order…specifies that an ISP remains ‘free to offer ‘edited’ services’ without becoming subject to the rule’s requirements.” 

The 2014 White House pressure didn’t occur in a vacuum. It occurred immediately after Democratic losses in the November 2014 midterms. As Public Knowledge president Gene Kimmelman tells it, President Obama needed to give progressives “a clean victory for us to show that we are standing up for our principles.” The slapdash legal finessing that followed was presaged by President Obama’s November 2014 national address urging Title II classification of the Internet, which, as posted on the Obama White House website, cites the wrong communications law to this day.

The FCC staff did their best with what they were given, but the resulting Order was aimed at political symbolism and acquiring jurisdiction to regulate the Internet, not meaningful “net neutrality” protections. As internal FCC emails produced in a Senate majority report show, Wheeler’s reversal that week caught the non-partisan career FCC staff off guard. Literally overnight, FCC staff had to scrap the “hybrid” (non-Title II) order they’d been carefully drafting for weeks and scrape together a legal justification for using Title II. This meant calling in advocates to enhance the record and making dubious citations to the economics literature. Former FCC chief economist Prof. Michael Katz, whose work was cited in the Order, later told Forbes that he suspected the “FCC cited my papers as an inside joke, because they know how much I think net neutrality is a bad idea.”

Applying 1934 telegraph and telephone laws to the Internet was always going to have unintended consequences, but the politically driven Order increasingly looks like an own goal, even to supporters. Former FCC chief technologist Jon Peha, who supports Title II classification of ISPs, almost immediately raised the alarm that the Order offered “massive loopholes” to ISPs that could make the rules irrelevant. This was made clear when the FCC attorney defending the Order in court acknowledged that ISPs are free to block and filter content and escape the Open Internet regulations and Title II. These concessions from the FCC surprised even AT&T VP Hank Hultquist:

Wow. ISPs are not only free to engage in content-based blocking, they can even create the long-dreaded fast and slow lanes so long as they make their intentions sufficiently clear to customers.

So the Open Internet Order not only permits the net neutrality “nightmare scenario,” it provides an incentive to ISPs to curate the Internet. Despite the activist PR surrounding the Order, so-called “fast lanes”–like carrier-provided VoIP, VoLTE, and IPTV–have existed for years and the FCC rules allow them.  The Order permits ISP blocking, throttling, and “fast lanes”–what remains of “net neutrality”?

Prof. Susan Crawford presciently warned in 2005: 

I have lost faith in our ability to write about code in words, and I’m confident that any attempt at writing down network neutrality will be so qualified, gutted, eviscerated, and emptied that it will end up being worse than useless.

Aside from a few religious ISPs, ISPs don’t want to filter Internet content. But the Obama FCC, via the “net neutrality” rules, gives them a new incentive: the Order deregulates ISPs that filter. ISPs will fight the rules because they want to continue to offer their conventional Internet service without submitting to the Title II baggage. This is why ISPs favor scrapping the Order: not only is it the FCC’s first claim to regulate Internet access, but if the rules are not repealed, ISPs will be compelled to make difficult decisions about their business models and technologies in the future.

Whatever you want to call them–autonomous vehicles, driverless cars, automated systems, unmanned systems, connected cars, pilotless vehicles, etc.–the life-saving potential of this new class of technologies appears to be enormous. I’ve spent a lot of time researching and writing about these issues, and I have yet to see any study forecast the opposite (i.e., a net loss of lives due to these technologies). While the estimated life savings vary, the numbers are uniformly positive across the board, and not just in terms of lives saved, but also in terms of reductions in other injuries, property damage, and the aggregate social costs associated with vehicular accidents more generally.

To highlight these important and consistent findings, I asked my research assistant Melody Calkins to help me compile a list of recent studies on this issue and summarize each one’s key takeaways regarding, at a minimum, the potential for lives saved. The studies and findings are listed below in reverse chronological order of publication. I may try to add to this over time, so please feel free to shoot me suggested updates as they become available.

Needless to say, these findings would hopefully have some bearing on public policy toward these technologies. Namely, we should be taking steps to accelerate this transition and remove roadblocks to the driverless car revolution, because we could be talking about the biggest public health success story of our lifetime if we get policy right here. Every day matters, because each day we delay this transition is another day during which 90 people die in car crashes and more than 6,500 are injured. And sadly, those numbers are going up, not down. According to the National Highway Traffic Safety Administration (NHTSA), auto crashes and roadway deaths are climbing for the first time in decades. Meanwhile, the agency estimated that 94 percent of all crashes are attributable to human error. We have the potential to do something about this tragedy, but we have to get public policy right. Delay is not an option.

Continue reading →

By Brent Skorup and Melody Calkins

Tech-optimists predict that drones and small aircraft may soon crowd US skies. An FAA administrator predicted that by 2020 tens of thousands of drones would be in US airspace at any one time. Further, over a dozen companies, including Uber, are building vertical takeoff and landing (VTOL) aircraft that could one day shuttle people point-to-point in urban areas. Today, low-altitude airspace use is episodic (helicopters, ultralights, drones), and with such light use the airspace is shared on an ad hoc basis with little air traffic management. Coordinating thousands of aircraft in low-altitude flight, however, demands a new regulatory framework.

Why not auction off low-altitude airspace for exclusive use?

There are two basic paradigms for resource use: open access and exclusive ownership. Most high-altitude airspace is lightly used, and the open access regime works tolerably well because there are few players (airline operators and the government) and routes are fixed. Similarly, Class G airspace—which varies by geography but is generally the airspace from the surface to 700 feet above ground—is uncontrolled and virtually open access.

Valuable resources vary immensely in their character–taxi medallions, real estate, radio spectrum, intellectual property, water–and a resource use paradigm, once selected, requires iteration and modification to ensure productive use. “The trick,” Prof. Richard Epstein notes, “is to pick the right initial point to reduce the stress on making these further adjustments.” If indeed dozens of operators will be vying for variable drone and VTOL routes in hundreds of local markets, exclusive use models could create more social benefits and output than open access and regulatory management. NASA is exploring complex coordination systems in this airspace but, rather than agency permissions, lawmakers should consider using property rights and the price mechanism.

The initial allocation of airspace could be determined by auction. An agency, probably the FAA, would:

  1. Identify and define geographic parcels of Class G airspace;
  2. Auction off the parcels to any party (private corporations, local governments, non-commercial stakeholders, or individual users) for a term of years with an expectation of renewal; and
  3. Permit the sale, combination, and subleasing of those parcels. (A simple sketch of this allocate-then-trade sequence appears below.)
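
To make those three steps concrete, here is a minimal, purely illustrative sketch of the allocate-then-trade mechanism. It is not drawn from our reply comments or from any actual FAA system; the parcel names, bidders, bid values, and single-round sealed-bid format are all assumptions made for the example.

```python
# Illustrative sketch only: hypothetical parcels, bidders, and bids.
from typing import Dict, Optional, Tuple


class Parcel:
    """A geographic block of Class G airspace, licensed for a term of years."""

    def __init__(self, name: str, term_years: int = 10) -> None:
        self.name = name
        self.term_years = term_years      # term of years with an expectation of renewal
        self.holder: Optional[str] = None  # winning bidder, or None if unsold


def run_auction(parcels: Dict[str, Parcel],
                bids: Dict[str, Dict[str, int]]) -> Dict[str, Tuple[str, int]]:
    """Step 2: assign each parcel to the highest sealed bid submitted for it."""
    results = {}
    for name, parcel in parcels.items():
        parcel_bids = bids.get(name, {})
        if parcel_bids:
            winner, price = max(parcel_bids.items(), key=lambda kv: kv[1])
            parcel.holder = winner
            results[name] = (winner, price)
    return results


def transfer(parcel: Parcel, seller: str, buyer: str) -> None:
    """Step 3: a secondary-market sale or sublease, letting latecomers acquire access."""
    if parcel.holder != seller:
        raise ValueError(f"{seller} does not hold {parcel.name}")
    parcel.holder = buyer


# Hypothetical example run
parcels = {
    "downtown-surface-to-700ft": Parcel("downtown-surface-to-700ft"),
    "suburb-north-surface-to-700ft": Parcel("suburb-north-surface-to-700ft"),
}
bids = {
    "downtown-surface-to-700ft": {"DroneCo": 1_200_000, "VTOL Shuttle": 950_000},
    "suburb-north-surface-to-700ft": {"City Gov": 300_000, "DroneCo": 280_000},
}
print(run_auction(parcels, bids))  # each parcel goes to its highest bidder
transfer(parcels["suburb-north-surface-to-700ft"], "City Gov", "NewEntrant LLC")
```

The point of the sketch is simply that an initial assignment plus freely transferable rights lets the price mechanism, rather than an agency permission process, move parcels toward their highest-valued uses over time.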

The likely alternative scenario–regulatory allocation and management of airspace–derives from historical precedent in aviation and spectrum policy:

  1. First movers and the politically powerful acquire de facto control of low-altitude airspace,
  2. Incumbents and regulators exclude and inhibit newcomers and innovators,
  3. The rent-seeking and resource waste becomes unendurable for lawmakers, and
  4. Market-based reforms are slowly and haphazardly introduced.

For instance, after demand for commercial flights took off in the 1960s, a command-and-control quota system was created for crowded Northeast airports. Takeoff and landing rights, called “slots,” were assigned to early airlines, but regulators did not allow airlines to sell those rights. The anticompetitive concentration and hoarding of slots at these airports are still being slowly unraveled by Congress and the FAA to this day. There’s a similar story for government assignment of spectrum over decades, as explained in Thomas Hazlett’s excellent new book, The Political Spectrum.

The benefit of an auction, plus secondary markets, is that the resource is generally put to its highest-valued use. Secondary markets and subleasing also permit latecomers and innovators to gain resource access despite lacking an initial assignment and political power. Further, exclusive use rights would give VTOL operators (and passengers) the added assurance that routes would be “clear” of potential collisions. (A more regulatory regime might provide that assurance, but likely via complex restrictions on airspace use.) Airspace rights would be a new cost for operators, but exclusive use means operators can economize on complex sensors, other safety devices, and lobbying costs. Operators would also possess an asset to sublease and monetize.

Another bonus (from the government’s point of view) is that the sale of Class G airspace can provide government revenue. Revenue would be slight at first but could prove lucrative once there’s substantial commercial interest. The federal government, for instance, auctions off usage rights for grazing, oil and gas extraction, radio spectrum, mineral extraction, and timber harvesting. Spectrum auctions alone have raised over $100 billion for the Treasury since they began in 1994.

[originally published on Plaintext on June 21, 2017.]

This summer, we celebrate the 20th anniversary of two developments that gave us the modern Internet as we know it. One was a court case that guaranteed online speech would flow freely, without government prior restraints or censorship threats. The other was an official White House framework for digital markets that ensured the free movement of goods and services online.

The result of these two vital policy decisions was an unprecedented explosion of speech freedoms and commercial opportunities whose benefits we continue to enjoy twenty years later.

While it is easy to take all this for granted today, it is worth remembering that, in the long arc of human history, no technology or medium has more rapidly expanded the range of human liberties — both speech and commercial liberties — than the Internet and digital technologies. But things could have turned out much differently if not for the crucially important policy choices the United States made for the Internet two decades ago. Continue reading →

I’ve written here before about the problems associated with the “technopanic mentality,” especially when it comes to how technopanics sometimes come to shape public policy decisions and restrict important new, life-enriching innovations. As I argued in a recent book, the problem with this sort of Chicken Little thinking is that, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning, it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.”

Perhaps the worst thing about worst-case thinking is how short-sighted and hubristic it can be. The technopanic crowd often has an air of snootiness about it: its members ask us to largely ignore the generally positive long-run trends associated with technological innovation and instead focus on hypothetical fears about an uncertain future that apparently only they can foresee. This is the case whether they are predicting the destruction of jobs, the economy, lifestyles, or culture. Techno-dystopia lies just around the corner, they say, but the rest of us are just ignorant sheep who can’t see it coming!

In his wonderful 2013 book, Smarter Than You Think: How Technology Is Changing Our Minds for the Better, Clive Thompson correctly noted that “dystopian predictions are easy to generate” and “doomsaying is emotionally self-protective: if you complain that today’s technology is wrecking the culture, you can tell yourself you’re a gimlet-eyed critic who isn’t hoodwinked by high-tech trends and silly, popular activities like social networking. You seem like someone who has a richer, deeper appreciation for the past and who stands above the triviality of today’s life.”

Stated differently, the doomsayers are guilty of a type of social and technical arrogance. They are almost always wrong on history, wrong on culture, and wrong on facts. Again and again, humans have proven remarkably resilient in the face of technological change and have overcome short-term adversity. Yet the technopanic pundits are almost never called out for their elitist attitudes later, when their prognostications are proven wildly off-base. Even more concerning, their Chicken Little antics lead them and others to ignore the more serious risks that may actually exist and that are worthy of our attention.

Here’s a nice example of that last point, which comes from a silent film made all the way back in 1911! Continue reading →