Internet regulation advocates are trying to turn a recent FCC Notice of Inquiry about the state of US telecommunications services into a controversy. Twelve US Senators have accused the FCC of wanting to “redefin[e] broadband” in order to “abandon further efforts to connect Americans.”

Given that Chairman Pai and the Commission are already pursuing actions to accelerate the deployment of broadband, with new proceedings and the formation of the Broadband Deployment Advisory Committee, the allegation that the current NOI is an excuse for inaction is perplexing.

The true “controversy” is much more mundane–reasonable people disagree about what congressional neologisms like “advanced telecommunications capability” mean. The FCC must interpret and apply the indeterminate language of Section 706 of the Telecommunications Act, which requires the FCC to determine “whether advanced telecommunications capability is being deployed in a reasonable and timely fashion.” If the answer is negative, the agency must “take immediate action to accelerate deployment of such capability by removing barriers to infrastructure investment and by promoting competition in the telecommunications market.” The inquiry is reported in an annual “Broadband Progress Report.” Much of the “scandal” of this proceeding is confusion about what “broadband” means.

What is broadband?

First: what qualifies as “broadband” download speed? It depends.

The OECD says anything above 256 kbps.

ITU standards set it at above 1.5 Mbps (or is it 2.0 Mbps?).

In the US, broadband is generally defined at higher speeds. The USDA’s Rural Utilities Service, for instance, defines it as 4.0 Mbps.

The FCC’s 2015 Broadband Progress Report found, as Obama FCC officials put it, that “the FCC’s definition of broadband” is now 25 Mbps. This is why advocates insist “broadband access” includes only wireline services above 25 Mbps.

But in the same month, the Obama FCC determined in the Open Internet Order that anything above dialup speed–56 kbps–is “broadband Internet access service.”

So, according to regulation advocates, 1.5 Mbps DSL service isn’t “broadband access” service but it is “broadband Internet access service.” Likewise a 30 Mbps 4G LTE connection isn’t a “broadband access” service but it is “broadband Internet access service.”

In other words, the word games about “broadband” are not coming from the Trump FCC. There is no consistency for what “broadband” means because prior FCCs kept changing the definition, and even used the term differently in different proceedings. As the Obama FCC said in 2009, “In previous reports to Congress, the Commission used the terms ‘broadband,’ ‘advanced telecommunications capability,’ and ‘advanced services’ interchangeably.”

Instead, what is going on is that the Trump FCC is trying to apply Section 706 to the current broadband market. The main questions are, what is advanced telecommunications capability, and is it “being deployed in a reasonable and timely fashion”?

Is mobile broadband an “advanced telecommunications capability”?

Previous FCCs declined to adopt a speed benchmark for when wireless service satisfies the “advanced telecommunications capability” definition. The so-called controversy arises because the latest NOI revisits this omission in light of consumer trends. The NOI straightforwardly asks whether mobile broadband above 10 Mbps satisfies the statutory definition of “advanced telecommunications capability.”

For that, the FCC must consult the statute. Such a capability, the statute says, is technology-neutral (i.e. includes wireless and “fixed” connections) and “enables users to originate and receive high-quality voice, data, graphics, and video telecommunications.”

Historically, since the statute doesn’t provide much precision, the FCC has examined subscription rates of various broadband speeds and services. From 2010 to 2015, the Obama FCCs defined advanced telecommunications capability as a fixed connection of 4 Mbps. In 2015, as mentioned, that benchmark was raised to 25 Mbps.

Regulation advocates fear that if the FCC looks at subscription rates, the agency might find that mobile broadband above 10 Mbps is an advanced telecommunications capability. This finding, they feel, would undermine the argument that the US broadband market needs intense regulation. According to recent Pew surveys, 12% of adults–about 28 million people–are “wireless only” and don’t have a wireline subscription. Those numbers certainly raise the possibility that mobile broadband is an advanced telecommunications capability.

Let’s look at the three fixed broadband technologies that “pass” the vast majority of households–cable modem, DSL, and satellite–and narrow the data to connections 10 Mbps or above.*

Home broadband connections (10 Mbps+)
Cable modem – 54.4 million
DSL – 11.8 million
Satellite – 1.4 million

It’s hard to know for sure since Pew measures adult individuals and the FCC measures households, but it’s possible more people have 4G LTE as home broadband (about 28 million adults and their families) than have 10 Mbps+ DSL as home broadband (11.8 million households).
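Since Pew counts adults and the FCC counts households, the comparison is assumption-sensitive. One way to frame it, using only the figures cited above, is to compute the break-even household size: how many mobile-only adults per household it would take for the two counts to be equal. This is just back-of-the-envelope arithmetic, not an official methodology.

```python
# Figures cited above (Pew measures adult individuals; the FCC measures households).
mobile_only_adults = 28_000_000      # Pew: ~12% of US adults are mobile-only
dsl_10mbps_households = 11_800_000   # FCC: households with 10 Mbps+ DSL

# Break-even: mobile-only adults per household at which the two counts match.
# If mobile-only households average fewer adults than this, they outnumber
# 10 Mbps+ DSL households.
breakeven_adults_per_household = mobile_only_adults / dsl_10mbps_households
print(f"Break-even: {breakeven_adults_per_household:.2f} adults per household")
```

Since the break-even comes out to roughly 2.4 adults per household, and mobile-only households plausibly average fewer adults than that, the “more mobile-only than 10 Mbps+ DSL” possibility is at least arithmetically reasonable.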

Subscription rates aren’t the end of the inquiry, but the fact that millions of households are going mobile-only rather than DSL or cable modem is suggestive evidence that mobile broadband offers an advanced telecommunications capability. (Considering T-Mobile is now providing 50 GB of data per line per month, mobile-only household growth will likely accelerate.)

Are high-speed services “being deployed in a reasonable and timely fashion”?

The second inquiry is whether these advanced telecommunications capabilities “are being deployed in a reasonable and timely fashion.” Again, the statute doesn’t give much guidance but consumer adoption of high-speed wireline and wireless broadband has been impressive.

So few people had 25 Mbps for so long that the FCC didn’t record it in its Internet Access Services reports until 2011. At the end of 2011, 6.3 million households subscribed to 25 Mbps. Less than five years later, in June 2016, over 56 million households subscribed. In the last year alone, fixed providers extended 25 Mbps or greater speeds to 21 million households.
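The growth figures above imply a striking compound annual rate. A quick sketch, using the cited endpoints (end of 2011 to June 2016, treated as 4.5 years):

```python
# 25 Mbps+ household subscriptions, from the figures cited above.
start_households = 6.3e6   # end of 2011
end_households = 56e6      # June 2016
years = 4.5                # end-2011 to mid-2016

# Compound annual growth rate implied by the two endpoints.
cagr = (end_households / start_households) ** (1 / years) - 1
print(f"Implied compound annual growth: {cagr:.1%}")
```

That works out to growth on the order of 60 percent per year, sustained for nearly half a decade.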

The FCC is not completely without guidance on this question. As part of the 2008 Broadband Data Services Improvement Act, Congress instructed the FCC to use international comparisons in its Section 706 Report. International comparisons also suggest that the US is deploying advanced telecommunications capability in a timely manner. For instance, according to the OECD, the US has 23.4 fiber and cable modem connections per 100 inhabitants, far exceeding the OECD average of 16.2 per 100 inhabitants.**

Anyway, the sky is not falling because the FCC is asking about mobile broadband subscription rates. More can be done to accelerate broadband–particularly if the government frees up more spectrum and local governments improve their permitting processes–but the Section 706 inquiry offers little that is controversial or new.

 

*Fiber and fixed wireless connections, with 9.6 million and 0.3 million subscribers, respectively, are also noteworthy, but these 10 Mbps+ technologies only cover certain areas of the country.

**America’s high rank in the OECD is similar if DSL is included, but the quality of DSL varies widely and often doesn’t provide 10 Mbps or 25 Mbps speeds.

Hurricanes Harvey and Irma mark the first time two Category 4 hurricanes have made U.S. landfall in the same year. Current estimates put the combined damage from the two hurricanes at between $150 billion and $200 billion.

If there is any positive story within these horrific disasters, it is the renewed sense of community and the outpouring of support from across the nation–from the star-studded Hand-in-Hand relief concert and J.J. Watt’s Twitter fundraiser to smaller efforts by local marching bands and police departments in faraway states.

What has made these disaster relief efforts different from past hurricanes? These recent efforts have been enabled by technology that was unavailable during past disasters, such as Hurricane Katrina. Continue reading →

Congress is poised to act on “driverless car” legislation that might help us achieve one of the greatest public health success stories of our lifetime by bringing down the staggering costs associated with car crashes.

The SELF DRIVE Act, currently awaiting a vote in the House of Representatives, would preempt existing state laws concerning driverless cars and replace them with a federal standard. The law would formalize the existing NHTSA standards for driverless cars and establish the agency’s role as the regulator of the design, construction, and performance of this technology. The states would regulate driverless cars and their technology in the same way they regulate driver-operated motor vehicles today.

It is important we get policy right on this front because motor vehicle accidents result in over 35,000 deaths and over 2 million injuries each year. These numbers continue to rise as more people hit the roads due to lower gas prices and as more distractions while driving emerge. The National Highway Traffic Safety Administration (NHTSA) estimates 94 percent of these crashes are caused by driver error.

Driverless cars provide a potential solution to this tragedy. One study estimated that widespread adoption of such technology would avoid about 28 percent of all motor vehicle accidents and prevent nearly 10,000 deaths each year. This lifesaving technology may be generally available sooner than expected if innovators are allowed to freely develop it.
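The figures cited above fit together arithmetically: applying the study’s estimated 28 percent accident-avoidance rate to the roughly 35,000 annual deaths yields the “nearly 10,000 deaths prevented” figure. A trivial sanity check, using only the numbers from the text:

```python
# Figures cited above: ~35,000 motor vehicle deaths per year, and one
# study's estimate that widespread adoption of driverless cars would
# avoid ~28% of accidents.
annual_deaths = 35_000
avoided_share = 0.28

deaths_prevented = annual_deaths * avoided_share
print(f"Implied deaths prevented per year: {deaths_prevented:,.0f}")
```

This assumes deaths scale with accidents avoided, which is the simplest reading of the study’s claim.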

Continue reading →

Are you interested in emerging technologies and the public policy issues surrounding them? Then come to work with me at the Mercatus Center at George Mason University!

The Mercatus Center is currently looking to hire a new Senior Research Fellow in our Technology Policy Program. Our tech policy team covers a large and growing array of cutting-edge issues, including: robotics, AI, and autonomous vehicles; commercial drones; the Internet of Things; virtual reality; cryptocurrencies; the Sharing Economy; 3D printing; and advanced medical and health technologies, just to name a few current priorities.

But the most exciting—and challenging—thing about covering tech policy is that the landscape of issues and concerns is always morphing and growing. Our new Senior Research Fellow will help our team determine our tech policy priorities going forward and then be responsible for engaging in scholarly work and public speaking on those topics.

All the finer details about this new position are listed on the Mercatus website. If you’re interested and qualified, please apply! Or, if you know of others who might be interested in this position, please forward this notice along to them.

On August 1, Sens. Mark Warner and Cory Gardner introduced the “Internet of Things Cybersecurity Improvement Act of 2017.” The goal of the legislation, according to its sponsors, is to establish “minimum security requirements for federal procurements of connected devices.” Pointing to the growing number of connected devices and their use in prior cyber-attacks, the sponsors aim to provide flexible requirements that limit the vulnerabilities of such networks. Specifically, the bill requires all new Internet of Things (IoT) devices to be patchable, free of known vulnerabilities, and reliant on standard protocols. Overall, the legislation attempts to increase and standardize the baseline security of connected devices while still allowing innovation in the field to remain relatively permissionless. As Ryan Hagemann at the Niskanen Center states, the bill is generally perceived as a step in the right direction in promoting security while limiting the potential harms of regulation to overall innovation in the Internet of Things.

Continue reading →

The Mercatus Center at George Mason University has just released a new paper on, “Artificial Intelligence and Public Policy,” which I co-authored with Andrea Castillo O’Sullivan and Raymond Russell. This 54-page paper can be downloaded via the Mercatus website, SSRN, or ResearchGate. Here is the abstract:

There is growing interest in the market potential of artificial intelligence (AI) technologies and applications as well as in the potential risks that these technologies might pose. As a result, questions are being raised about the legal and regulatory governance of AI, machine learning, “autonomous” systems, and related robotic and data technologies. Fearing concerns about labor market effects, social inequality, and even physical harm, some have called for precautionary regulations that could have the effect of limiting AI development and deployment. In this paper, we recommend a different policy framework for AI technologies. At this nascent stage of AI technology development, we think a better case can be made for prudence, patience, and a continuing embrace of “permissionless innovation” as it pertains to modern digital technologies. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated, and problems, if they develop at all, can be addressed later.

By Brent Skorup and Melody Calkins

Recently, the FCC sought comments for its Media Modernization Initiative in its effort to “eliminate or modify [media] regulations that are outdated, unnecessary, or unduly burdensome.” The regulatory thicket for TV distribution has long encumbered broadcast and cable providers. These rules encourage large, homogeneous cable TV bundles and burden cable and satellite operators with high compliance costs. (See the complex web of TV regulations at the Media Metrics website.)

One reason “skinny bundles” from online video providers and cable operators are attracting consumers is that online video circumvents the FCC’s Rube Goldberg-like system altogether. The FCC should end its 50-year experiment with TV regulation, which, among other things, has raised the cost of TV and degraded the First Amendment rights of media outlets.

The proposal to eliminate legacy media rules garnered a considerable amount of support from a wide range of commenters. In our filed reply comments, we identify four regulatory rules ripe for removal:

  • News distortion. This uncodified, under-the-radar rule allows the commission to revoke a broadcaster’s license if the FCC finds that the broadcaster deliberately engages in “news distortion, staging, or slanting.” The rule traces back to the FCC’s longstanding position that it can revoke licenses from broadcast stations if programming is not “in the public interest.”

    Though uncodified and not strictly enforced, the rule was reiterated in the FCC’s 2008 broadcast guidelines. The outline of the rule was laid out in the 1998 case Serafyn v. CBS, involving a complaint by a Ukrainian-American who alleged that the “60 Minutes” news program had unfairly edited interviews to portray Ukrainians as backwards and anti-Semitic. The FCC dismissed the complaint, but the DC Circuit reversed that dismissal and required FCC intervention. (CBS settled and the complaint was dropped before the FCC could intervene.)

    “Slanted” and distorted news can be found in (unregulated) cable news, newspapers, Twitter, and YouTube. The news distortion rule should be repealed and broadcasters should have regulatory parity (and their full First Amendment rights) restored.
  • Must-carry. The rule requires cable operators to distribute the programming of local broadcast stations at broadcasters’ request. (Stations carrying relatively low-value broadcast networks seek carriage via must-carry. Stations carrying popular networks like CBS and NBC can negotiate payment from cable operators via “retransmission consent” agreements.) Must-carry was narrowly sustained by the Supreme Court in 1994 against a First Amendment challenge, on the grounds that cable operators had monopoly power in the pay-TV market. Since then, however, cable’s market share has shrunk from 95% to 53%. Broadcast stations now have far more options for distribution, including satellite TV, telco TV, and online distribution, and it’s unlikely the rules would survive a First Amendment challenge today.
  • Network nonduplication and syndicated exclusivity. These rules limit how and when broadcast programming can be distributed and allow the FCC to intervene if a cable operator breaches a contract with a broadcast station. But the (exempted) distribution of hundreds of non-broadcast channels (e.g., CNN, MTV, ESPN) shows that programmers and distributors are fully capable of negotiating carriage agreements privately without FCC oversight. These rules simply make licensing negotiations more difficult and invite FCC intervention.

Finally, we identify retransmission consent regulations and compulsory licenses for repeal. Because “retrans” interacts with copyright matters outside of the FCC’s jurisdiction, we encourage the FCC to work with the Copyright Office in advising Congress to repeal these statutes. Cable operators dislike the retrans framework, and broadcasters dislike being compelled to license programming at regulated rates. These interventions simply aren’t needed (hundreds of cable and online-only TV channels operate outside of this framework), and neither the FCC nor the Copyright Office particularly likes being the referee in these fights. The FCC should break the stalemate and approach the Copyright Office about advocating for direct licensing of broadcast TV content.

My professional life is dedicated to researching the public policy implications of various emerging technologies. Of the many issues and sectors that I cover, none are more interesting or important than advanced medical innovation. After all, new health care technologies offer the greatest hope for improving human welfare and longevity. Consequently, the public policies that govern these technologies and sectors will have an important bearing on just how much life-enriching or life-saving medical innovation we actually get going forward.

Few people are doing better reporting on the intersection of advanced technology and medicine — as well as the effects of regulation on those fields — than my Mercatus Center colleague Jordan Reimschisel. In a very short period of time, Jordan has completely immersed himself in these complex, cutting-edge topics and produced a remarkable body of work discussing how, in his words, “technology can merge with medicine to democratize medical decision making, empower patients to participate in the treatment process, and promote better health outcomes for more patients at lower and lower costs.” He gets deep into the weeds of the various technologies he writes about as well as the legal, ethical, and economic issues surrounding each topic.

I encouraged him to start an ongoing compendium of his work on these topics so that we could continue to highlight his research, some of which I have been honored to co-author with him. I have listed his current catalog down below, but jump over to this Medium page he set up and bookmark it for future reference. This is some truly outstanding work and I am excited to see where he goes next with topics as wide-ranging as “biohackerspaces,” democratized or “personalized” medicine, advanced genetic testing and editing techniques, and the future of the FDA in an age of rapid change.

Give Jordan a follow on Twitter (@jtreimschisel) and make sure to follow his Medium page for his dispatches from the front lines of the debate over advanced medical innovation and its regulation.

Continue reading →

“First electricity, now telephones. Sometimes I feel as if I were living in an H.G. Wells novel.” –Dowager Countess, Downton Abbey

Every technology we take for granted was once new, different, and disruptive, and was often ridiculed and resisted as a result. Electricity, telephones, trains, and television all once caused widespread fears, in the way that robots, artificial intelligence, and the internet of things do today. Typically, most people eventually realize that these fears are misplaced and overly pessimistic; the technology diffuses, and we can barely remember our lives without it. But in the recent technopanics, there has been a concern that the legal system is not properly equipped to handle the possible harms from these new technologies. As a result, there are often calls to regulate or rein in their use.

In the early 1980s, video cassette recorders (VCRs) caused a legal technopanic. The concerns were less that VCRs would lead to some bizarre human mutation, as in many technopanics, and more that the existing system of copyright infringement and vicarious liability could not adequately address the potential harm to the motion picture industry. The then-president of the Motion Picture Association of America, Jack Valenti, famously told Congress, “I say to you that the VCR is to the American film producer and the American public as the Boston Strangler is to the woman home alone.”

Continue reading →

If the techno-pessimists are right and robots are set to take all the jobs, shouldn’t employment in Amazon warehouses be plummeting right now? After all, Amazon’s sorting and fulfillment centers have been automated at a rapid pace, with robotic technologies now being integrated into almost every facet of the process. (Just watch the video below to see it all in action.)

And yet according to this Wall Street Journal story by Laura Stevens, Amazon is looking to immediately fill 50,000 new jobs, which would mean that its U.S. workforce “would swell to around 300,000, compared with 30,000 in 2011.”  According to the article, “Nearly 40,000 of the promised jobs are full-time at the company’s fulfillment centers, including some facilities that will open in the coming months. Most of the remainder are part-time positions available at Amazon’s more than 30 sorting centers.”

How can this be? Shouldn’t the robots have eaten all those jobs by now?

Continue reading →