As I have previously written, a bill currently up for debate in Congress risks gutting critical liability protections for internet intermediaries. Earlier today the Stop Enabling Sex Traffickers Act passed out of committee with an amendment that attempts to remedy some of the most damaging changes to Section 230 in the original act. While the amendment has gained support from some industry groups and shows increased awareness of the act's far-reaching consequences, it does not fully address the concerns regarding intermediary liability under Section 230, leaving changes that could have a chilling effect on speech on the internet and risk stifling future internet innovation.

Continue reading →

Tesla, Volvo, and Cadillac have all released vehicles with features that push them beyond standard level 2 automation toward a level 3 "self-driving" system, where the driver still needs to be present but the car can do most of the work. While there have been some notable accidents, most were tied to driver error or behavior rather than the technology. Still, autonomous vehicles hold the promise of potentially reducing traffic accidents by more than 90 percent if widely adopted. However, fewer accidents and a reduced potential for human error in driving could change the function and formulas of the auto insurance market.

Continue reading →

Broadcast license renewal challenges have troubled libertarians and free speech advocates for decades. Despite our efforts (and our law journal articles on the abuse of the licensing process), license challenges are legal. In fact, political parties, prior FCCs, and activist groups have encouraged license challenges based on TV content to ensure broadcasters are operating in "the public interest." Further, courts have compelled and will compel a reluctant FCC to investigate "news distortion" and other violations of FCC broadcast rules. It's a troubling state of affairs that has been pushed back into relevance because FCC license challenges are in the news.

In recent years the FCC, whether led by Democrats or Republicans, has preferred to avoid tricky questions surrounding license renewals. Chairman Pai, like most recent FCC chairs, has been an outspoken defender of First Amendment protections and norms. He opposed, for instance, the Obama FCC’s attempt to survey broadcast newsrooms about their coverage. He also penned an op-ed bringing attention to the fact that federal NSF funding was being used by left-leaning researchers to monitor and combat “misinformation and propaganda” on social media.

The Republican commissioners' silence today about license renewals is likely, first, because they have higher priorities (like broadband deployment and freeing up spectrum) than intervening in the competitive media marketplace. But second, and less understood, whether to investigate a news station isn't really up to them. Courts can overrule them and compel an investigation.

Political actors have used FCC licensing procedures for decades to silence political opponents and unfavorable media. For reasons I won’t explore here, TV and radio broadcasters have diminished First Amendment rights and the public is permitted to challenge their licenses at renewal time.

So progressive "citizens groups," even in recent years, have challenged broadcasters' license renewals over "one-sided programming." Unfortunately, it works. For instance, in 2004 the threat of multi-year renewal challenges from outside groups, and the risk of payback from a Democratic FCC, forced broadcast stations to trim a documentary critical of John Kerry from 40 minutes to 4 minutes. And, unlike their cable counterparts, broadcasters censor nude scenes in TV and movies because even a Janet Jackson Super Bowl scenario can lead to expensive license challenges.

These troubling licensing procedures and pressure points were largely unknown to the public, but on October 11, President Trump tweeted:

“With all of the Fake News coming out of NBC and the Networks, at what point is it appropriate to challenge their License? Bad for country!”

So why hasn't the FCC said it won't investigate NBC and other broadcast station owners? It may be because courts can compel the FCC to investigate "news distortion."

This is exactly what happened to the Clinton FCC. As Melody Calkins and I wrote in August about the FCC’s news distortion rule:

Though uncodified and not strictly enforced, the rule was reiterated in the FCC's 2008 broadcast guidelines. The outline of the rule was laid out in the 1998 case Serafyn v. CBS, involving a complaint by a Ukrainian-American who alleged that the "60 Minutes" news program had unfairly edited interviews to portray Ukrainians as backwards and anti-Semitic. The FCC dismissed the complaint, but the D.C. Circuit reversed that dismissal and required FCC intervention. (CBS settled and the complaint was dropped before the FCC could intervene.)

The commissioners might personally wish broadcasters had full First Amendment protections and want to dismiss all challenges, but current law permits and encourages license challenges. The commission can be compelled to act because of the sins of omission of prior FCCs: deciding to retain the news distortion rule and other antiquated "public interest" regulations for broadcasters. The existence of these old media rules means the FCC's hands are tied.

In recent months, I've come across a growing pool of young professionals looking to enter the technology policy field. Although I was lucky enough to find a willing and capable mentor to guide me through a lot of the nitty-gritty, many of these would-be policy entrepreneurs haven't been as lucky. Most are keen to shift out of their current policy area, or are newcomers to Washington, D.C. looking to break into a technology policy career track. This is a town with no shortage of sage wisdom, and while much of it remains relevant to up-and-comers, I figured I would pen these thoughts based on my own experiences as a relative newcomer to the D.C. tech policy community.

I came to D.C. in 2013, originally spurred by the then-recent revelations of mass government surveillance revealed by Edward Snowden’s NSA leaks. That event led me to the realization that the Internet was fragile, and that engaging in the battle of ideas in D.C. might be a career calling. So I packed up and moved to the nation’s capital, intent on joining the technology policy fray. When I arrived, however, I was immediately struck by the almost complete lack of jobs in, and focus on, technology issues in libertarian circles.

Through a series of fortuitous circumstances, I ultimately managed to break into a field that was still a small and relatively under-appreciated group. What we lacked in numbers and support we had to make up for in quality and determined effort. Although the tech policy community has grown rapidly in recent years, it remains a niche vocation relative to other policy tracks. That means there's a lot of potential for rapid professional growth—if you can manage to get your foot in the door.

So if you’re interested in breaking into technology policy, here are some thoughts that might be of help. Continue reading →

The Mercatus Center at George Mason University has just released a new paper, "Permissionless Innovation and Immersive Technology: Public Policy for Virtual and Augmented Reality," which I co-authored with Jonathan Camp. The 53-page paper can be downloaded via the Mercatus website, SSRN, or ResearchGate.

Here is the abstract for the paper:

Immersive technologies such as augmented reality, virtual reality, and mixed reality are finally taking off. As these technologies become more widespread, concerns will likely develop about their disruptive social and economic effects. This paper addresses such policy concerns and contrasts two different visions for governing immersive tech going forward. The paper makes the case for permissionless innovation, or the general freedom to innovate without prior constraint, as the optimal policy default to maximize the benefits associated with immersive technologies.

The alternative vision — the so-called precautionary principle — would be an inappropriate policy default because it would greatly limit the potential for beneficial applications and uses of these new technologies to emerge rapidly. Public policy for immersive technology should not be based on hypothetical worst-case scenarios. Rather, policymakers should wait to see which concerns or harms emerge and then devise ex post solutions as needed.

To better explain why precautionary controls on these emerging technologies would be such a mistake, Camp and I provide an inventory of the many VR, AR, and mixed reality applications that are already on the market, or soon could be, and that could provide society with profound benefits. A few examples include: Continue reading →

Internet regulation advocates are trying to turn a recent FCC Notice of Inquiry about the state of US telecommunications services into a controversy. Twelve US Senators have accused the FCC of wanting to “redefin[e] broadband” in order to “abandon further efforts to connect Americans.”

Given that Chairman Pai and the Commission are already taking actions to accelerate the deployment of broadband, with new proceedings and the formation of the Broadband Deployment Advisory Committee, the allegation that the current NOI is an excuse for inaction is perplexing.

The true "controversy" is much more mundane: reasonable people disagree about what congressional neologisms like "advanced telecommunications capability" mean. The FCC must interpret and apply the indeterminate language of Section 706 of the Telecommunications Act, which requires the FCC to determine "whether advanced telecommunications capability is being deployed in a reasonable and timely fashion." If the answer is negative, the agency must "take immediate action to accelerate deployment of such capability by removing barriers to infrastructure investment and by promoting competition in the telecommunications market." The inquiry is reported in an annual "Broadband Progress Report." Much of the "scandal" of this proceeding is confusion about what "broadband" means.

What is broadband?

First: what qualifies as “broadband” download speed? It depends.

The OECD says anything above 256 kbps.

ITU standards set it at above 1.5 Mbps (or is it 2.0 Mbps?).

In the US, broadband is generally defined as a higher speed. The USDA’s Rural Utilities Service defines it as 4.0 Mbps.

The FCC’s 2015 Broadband Progress Report found, as Obama FCC officials put it, that “the FCC’s definition of broadband” is now 25 Mbps. This is why advocates insist “broadband access” includes only wireline services above 25 Mbps.

But in the same month, the Obama FCC determined in the Open Internet Order that anything above dialup speed–56 kbps–is “broadband Internet access service.”

So, according to regulation advocates, 1.5 Mbps DSL service isn’t “broadband access” service but it is “broadband Internet access service.” Likewise a 30 Mbps 4G LTE connection isn’t a “broadband access” service but it is “broadband Internet access service.”
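To make the inconsistency concrete, here is a minimal sketch (my own illustration, not any official FCC classification tool) that checks a connection's download speed against the thresholds mentioned above:

```python
# Illustrative only: threshold floors (in Mbps) drawn from the definitions
# discussed above. "Passes" means the speed exceeds the floor.
THRESHOLDS_MBPS = {
    "OECD": 0.256,                                               # above 256 kbps
    "USDA RUS": 4.0,
    "2015 Progress Report ('broadband access')": 25.0,
    "Open Internet Order ('broadband Internet access service')": 0.056,  # above dialup
}

def classify(speed_mbps):
    """Return which 'broadband' definitions a given download speed satisfies."""
    return {name: speed_mbps > floor for name, floor in THRESHOLDS_MBPS.items()}

# 1.5 Mbps DSL fails the 25 Mbps "broadband access" benchmark but easily
# clears the Open Internet Order's dialup floor; 30 Mbps LTE clears them all.
print(classify(1.5))
print(classify(30.0))
```

Under this sketch, the same 1.5 Mbps DSL line counts as broadband for the OECD and for the Open Internet Order, but not for the Rural Utilities Service or the 25 Mbps benchmark, which is exactly the word game described above.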

In other words, the word games about "broadband" are not coming from the Trump FCC. There is no consistency in what "broadband" means because prior FCCs kept changing the definition, and even used the term differently in different proceedings. As the Obama FCC said in 2009, "In previous reports to Congress, the Commission used the terms 'broadband,' 'advanced telecommunications capability,' and 'advanced services' interchangeably."

Instead, what is going on is that the Trump FCC is trying to apply Section 706 to the current broadband market. The main questions are, what is advanced telecommunications capability, and is it “being deployed in a reasonable and timely fashion”?

Is mobile broadband an “advanced telecommunications capability”?

Previous FCCs declined to adopt a speed benchmark for when wireless service satisfies the “advanced telecommunications capability” definition. The so-called controversy is because the latest NOI revisits this omission in light of consumer trends. The NOI straightforwardly asks whether mobile broadband above 10 Mbps satisfies the statutory definition of “advanced telecommunications capability.”

For that, the FCC must consult the statute. Such a capability, the statute says, is technology-neutral (i.e. includes wireless and “fixed” connections) and “enables users to originate and receive high-quality voice, data, graphics, and video telecommunications.”

Historically, since the statute doesn't provide much precision, the FCC has examined subscription rates for various broadband speeds and services. From 2010 to 2015, the Obama FCCs defined advanced telecommunications capability as a fixed connection of 4 Mbps. In 2015, as mentioned, that benchmark was raised to 25 Mbps.

Regulation advocates fear that if the FCC looks at subscription rates, the agency might find that mobile broadband above 10 Mbps is an advanced telecommunications capability. This finding, they feel, would undermine the argument that the US broadband market needs intense regulation. According to recent Pew surveys, 12% of adults–about 28 million people–are “wireless only” and don’t have a wireline subscription. Those numbers certainly raise the possibility that mobile broadband is an advanced telecommunications capability.

Let’s look at the three fixed broadband technologies that “pass” the vast majority of households–cable modem, DSL, and satellite–and narrow the data to connections 10 Mbps or above.*

Home broadband connections (10 Mbps+)
Cable modem – 54.4 million
DSL – 11.8 million
Satellite – 1.4 million

It’s hard to know for sure since Pew measures adult individuals and the FCC measures households, but it’s possible more people have 4G LTE as home broadband (about 28 million adults and their families) than have 10 Mbps+ DSL as home broadband (11.8 million households).
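A quick back-of-envelope version of that comparison, using only the figures quoted above (and keeping in mind that Pew counts adult individuals while the FCC counts households, so this is suggestive rather than apples-to-apples):

```python
# Figures as cited in the text, in millions. The two sources measure
# different units (households vs. adults), so treat this as suggestive.
fixed_10mbps_households = {"cable modem": 54.4, "DSL": 11.8, "satellite": 1.4}
mobile_only_adults = 28.0  # Pew: ~12% of adults are wireless-only

# Mobile-only adults alone, before counting their family members, already
# outnumber households with 10 Mbps+ DSL.
print(mobile_only_adults > fixed_10mbps_households["DSL"])
```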

Subscription rates aren’t the end of the inquiry, but the fact that millions of households are going mobile-only rather than DSL or cable modem is suggestive evidence that mobile broadband offers an advanced telecommunications capability. (Considering T-Mobile is now providing 50 GB of data per line per month, mobile-only household growth will likely accelerate.)

Are high-speed services “being deployed in a reasonable and timely fashion”?

The second inquiry is whether these advanced telecommunications capabilities “are being deployed in a reasonable and timely fashion.” Again, the statute doesn’t give much guidance but consumer adoption of high-speed wireline and wireless broadband has been impressive.

So few people had 25 Mbps for so long that the FCC didn’t record it in its Internet Access Services reports until 2011. At the end of 2011, 6.3 million households subscribed to 25 Mbps. Less than five years later, in June 2016, over 56 million households subscribed. In the last year alone, fixed providers extended 25 Mbps or greater speeds to 21 million households.

The FCC is not completely without guidance on this question. As part of the 2008 Broadband Data Services Improvement Act, Congress instructed the FCC to use international comparisons in its Section 706 Report. International comparisons also suggest that the US is deploying advanced telecommunications capability in a timely manner. For instance, according to the OECD the US has 23.4 fiber and cable modem connections per 100 inhabitants, which far exceeds the OECD average, 16.2 per 100 inhabitants.**

Anyway, the sky is not falling because the FCC is asking about mobile broadband subscription rates. More can be done to accelerate broadband–particularly if the government frees up more spectrum and local governments improve their permitting processes–but the Section 706 inquiry offers little that is controversial or new.

 

*Fiber and fixed wireless connections, 9.6 million and 0.3 million subscribers, respectively, are also noteworthy but these 10 Mbps+ technologies only cover certain areas of the country.

**America’s high rank in the OECD is similar if DSL is included, but the quality of DSL varies widely and often doesn’t provide 10 Mbps or 25 Mbps speeds.

Hurricanes Harvey and Irma mark the first time two Category 4 hurricanes have made U.S. landfall in the same year. Current estimates are that the two hurricanes caused between $150 billion and $200 billion in damage.

If there is any positive story within these horrific disasters, it is that these events have seen a renewed sense of community and an outpouring of support from across the nation, ranging from the recent star-studded Hand in Hand relief concert and J.J. Watt's Twitter fundraiser to smaller efforts by local marching bands and police departments in faraway states.

What has made these disaster relief efforts different from past hurricanes? These recent efforts have been enabled by technology that was unavailable during past disasters, such as Hurricane Katrina. Continue reading →

Congress is poised to act on “driverless car” legislation that might help us achieve one of the greatest public health success stories of our lifetime by bringing down the staggering costs associated with car crashes.

The SELF DRIVE Act, currently awaiting a vote in the House of Representatives, would preempt existing state laws concerning driverless cars and replace them with a federal standard. The law would formalize the existing NHTSA standards for driverless cars and establish the agency's role as the regulator of the design, construction, and performance of this technology. The states would regulate driverless cars and their technology in the same way they do current driver-operated motor vehicles.

It is important we get policy right on this front because motor vehicle accidents result in over 35,000 deaths and over 2 million injuries each year. These numbers continue to rise as more people hit the roads due to lower gas prices and as more distractions while driving emerge. The National Highway Traffic Safety Administration (NHTSA) estimates 94 percent of these crashes are caused by driver error.

Driverless cars provide a potential solution to this tragedy. One study estimated that widespread adoption of such technology would avoid about 28 percent of all motor vehicle accidents and prevent nearly 10,000 deaths each year. This lifesaving technology may be generally available sooner than expected if innovators are allowed to freely develop it.
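Those two estimates are roughly consistent with each other. A back-of-envelope check (my arithmetic, not the study's own model, and assuming deaths fall in proportion to accidents avoided):

```python
# Rough consistency check of the figures cited above.
annual_deaths = 35_000   # motor vehicle deaths per year (approximate)
avoided_share = 0.28     # study's estimate: ~28% of accidents avoided

deaths_prevented = annual_deaths * avoided_share
print(round(deaths_prevented))  # 9800 -- consistent with "nearly 10,000"
```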

Continue reading →

Are you interested in emerging technologies and the public policy issues surrounding them? Then come to work with me at the Mercatus Center at George Mason University!

The Mercatus Center is currently looking to hire a new Senior Research Fellow in our Technology Policy Program. Our tech policy team covers a large and growing array of cutting-edge issues, including: robotics, AI, and autonomous vehicles; commercial drones; the Internet of Things; virtual reality; cryptocurrencies; the Sharing Economy; 3D printing; and advanced medical and health technologies, just to name a few current priorities.

But the most exciting—and challenging—thing about covering tech policy is that the landscape of issues and concerns is always morphing and growing. Our new Senior Research Fellow will help our team determine our tech policy priorities going forward and then be responsible for engaging in scholarly work and public speaking on those topics.

All the finer details about this new position are listed on the Mercatus website. If you’re interested and qualified, please apply! Or, if you know of others who might be interested in this position, please forward this notice along to them.

On August 1, Sens. Mark Warner and Cory Gardner introduced the "Internet of Things Cybersecurity Improvement Act of 2017." The goal of the legislation, according to its sponsors, is to establish "minimum security requirements for federal procurements of connected devices." Pointing to the growing number of connected devices and their use in prior cyber-attacks, the sponsors aim to provide flexible requirements that limit the vulnerabilities of such networks. Specifically, the bill requires all new Internet of Things (IoT) devices to be patchable, free of known vulnerabilities, and reliant on standard protocols. Overall, the legislation attempts to increase and standardize the baseline security of connected devices while still allowing innovation in the field to remain relatively permissionless. As Ryan Hagemann at the Niskanen Center states, the bill is generally perceived as a step in the right direction, promoting security while limiting the potential harms of regulation to overall innovation in the Internet of Things.

Continue reading →