The House version of the Stop Enabling Sex Traffickers Act (SESTA), called the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), has undergone significant changes that appear to let it both address the scourge of online sex trafficking and maintain the internet liability protections that encourage a free and open internet. On Tuesday, this amended version passed the House Judiciary Committee. Like most legislation, this latest draft isn’t perfect. But it takes significant steps toward preserving freedom online while addressing the misdeeds of a few.


In 2015, after White House pressure, the FCC took the radical step of classifying “broadband Internet access service” as a heavily regulated Title II service. Title II was created for the AT&T long-distance monopoly and the telegraph network; promoting innovation and competition was never its purpose. It is ill-suited for the modern Internet, where hundreds of ISPs and tech companies are experimenting with new technologies and topologies.

Commissioner Brendan Carr was gracious enough to speak with Chris Koopman and me in a Mercatus podcast last week about his decision to vote to reverse the Title II classification. The podcast can be found at the Mercatus website. One highlight from Commissioner Carr:

Congress had a fork in the road. …In 1996, Congress made a decision that we’re going to head down the Title I route [for the Internet]. That decision has been one of the greatest public policy decisions that we’ve ever seen. That’s what led to the massive investment in the Internet. Over a trillion dollars invested. Consumers were protected. Innovators were free to innovate. Unfortunately, two years ago the Commission departed from that framework and moved into a very different heavy-handed regulatory world, the Title II approach.

Along those lines, in my recent ex parte meeting with Chairman Pai’s office, I pointed to an interesting 2002 study in the Review of Economics and Statistics from MIT Press about the stifling effects of Title II regulation:

[E]xisting economics scholarship suggests that a permissioned approach to new services, like that proposed in the [2015] Open Internet Order, inhibits innovation and new services in telecommunications. As a result of an FCC decision and a subsequent court decision in the late 1990s, for 18 to 30 months, depending on the firm, [Title II] carriers were deregulated and did not have to submit new offerings to the FCC for review. After the court decision, the FCC required carriers to file retroactive plans for services introduced after deregulation.

This turn of events allowed economist James Prieger to compare the rate of new-service deployment in the regulated period and the brief deregulated period. Prieger found that “some otherwise profitable services are not financially viable under” the permissioned regime. Critically, the number of services carriers deployed “during the [deregulated] interim is 60%-99% larger than the model predicts they would have created” when preapproval was required. Finally, Prieger found that firms would have introduced 62% more services over the entire study period had there been no permissioned regime. This is suggestive evidence that the Order’s “Mother, May I?” approach will significantly harm the Internet services market.
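
To make those percentages concrete, here is a tiny illustrative calculation. The baseline of 100 services is hypothetical, chosen only to show the scale of Prieger’s estimates:

```python
# Hypothetical baseline, used only to illustrate Prieger's percentages:
# suppose the model predicts a carrier would introduce 100 new services
# under the permissioned (preapproval) regime.
predicted_permissioned = 100

# Observed deployment during the deregulated interim ran 60%-99% higher
# than that prediction.
interim_low = predicted_permissioned * 1.60   # 160 services
interim_high = predicted_permissioned * 1.99  # 199 services

# Over the whole study period, firms would have introduced 62% more
# services than they actually did, absent the permissioned regime.
actual_full_period = 100                      # also hypothetical
counterfactual = actual_full_period * 1.62    # 162 services

print(f"Deregulated interim: {interim_low:.0f}-{interim_high:.0f} services")
print(f"Full-period counterfactual: {counterfactual:.0f} services")
```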

Thankfully, this FCC has incorporated economic scholarship into its Restoring Internet Freedom Order and will undo the costly Title II classification for Internet services.

Over at Plain Text, I have posted a new essay entitled, “Converting Permissionless Innovation into Public Policy: 3 Reforms.” It’s a preliminary sketch of some reform ideas that I have been working on as part of my next book project. The goal is to find some creative ways to move the ball forward on the innovation policy front, regardless of what level of government we are talking about.

To maximize the potential for ongoing, positive change and create a policy environment conducive to permissionless innovation, I argue that policymakers should pursue policy reforms based on these three ideas:

  1. The Innovator’s Presumption: Any person or party (including a regulatory authority) who opposes a new technology or service shall have the burden to demonstrate that such proposal is inconsistent with the public interest.
  2. The Sunsetting Imperative: Any existing or newly imposed technology regulation should include a provision sunsetting the law or regulation within two years.
  3. The Parity Provision: Any operator offering a similarly situated product or service should be regulated no more stringently than its least regulated competitor.

These provisions are crafted in a somewhat generic fashion in the hope that these reform proposals could be modified and adopted by various legislative or regulatory bodies. If you are interested in reading more details about each proposal, jump over to Plain Text to read the entire essay.

As I have previously written, a bill currently up for debate in Congress risks gutting critical liability protections for internet intermediaries. Earlier today the Stop Enabling Sex Traffickers Act passed out of committee with an amendment that attempts to remedy some of the most damaging changes to Section 230 in the original act. While this amendment has gained support from some industry groups, it does not fully address the concerns about intermediary liability under Section 230. The amended version shows increased awareness of the act’s far-reaching consequences, but it still leaves room for a chilling effect on speech on the internet and risks stifling future internet innovation.


Tesla, Volvo, and Cadillac have all released vehicles with features that push beyond standard Level 2 automation and approach a Level 3 “self-driving” system, in which the driver still needs to be present but the car can do most of the work. While there have been some notable accidents, most were tied to driver error or behavior rather than to the technology. Still, autonomous vehicles hold the promise of reducing traffic accidents by more than 90% if widely adopted. However, fewer accidents and a reduced potential for human error in driving could change the function and formulas of the auto insurance market.


Broadcast license renewal challenges have troubled libertarians and free speech advocates for decades. Despite our efforts (and our law journal articles on the abuse of the licensing process), license challenges remain legal. In fact, political parties, prior FCCs, and activist groups have encouraged license challenges based on TV content to ensure broadcasters operate in “the public interest.” Further, courts have compelled, and will compel, a reluctant FCC to investigate “news distortion” and other violations of FCC broadcast rules. It’s a troubling state of affairs that has been pushed back into relevance because FCC license challenges are in the news.

In recent years the FCC, whether led by Democrats or Republicans, has preferred to avoid tricky questions surrounding license renewals. Chairman Pai, like most recent FCC chairs, has been an outspoken defender of First Amendment protections and norms. He opposed, for instance, the Obama FCC’s attempt to survey broadcast newsrooms about their coverage. He also penned an op-ed calling attention to the fact that NSF funding was being used by left-leaning researchers to monitor and combat “misinformation and propaganda” on social media.

The Republican commissioners’ silence today about license renewals is likely, first, because they have higher priorities (like broadband deployment and freeing up spectrum) than intervening in the competitive media marketplace. But a second, less understood reason is that whether to investigate a news station isn’t really up to them. Courts can overrule them and compel an investigation.

Political actors have used FCC licensing procedures for decades to silence political opponents and unfavorable media. For reasons I won’t explore here, TV and radio broadcasters have diminished First Amendment rights and the public is permitted to challenge their licenses at renewal time.

So progressive “citizens groups,” even in recent years, have challenged broadcasters’ license renewals over “one-sided programming.” Unfortunately, it works. In 2004, for instance, the threat of multi-year renewal challenges from outside groups and the risk of payback from a Democratic FCC forced broadcast stations to trim a documentary critical of John Kerry from 40 minutes to 4 minutes. And, unlike their cable counterparts, broadcasters censor nude scenes in TV shows and movies because even a Janet Jackson Super Bowl scenario can lead to expensive license challenges.

These troubling licensing procedures and pressure points were largely unknown to most people, but, on October 11, President Trump tweeted:

“With all of the Fake News coming out of NBC and the Networks, at what point is it appropriate to challenge their License? Bad for country!”

So why hasn’t the FCC said it won’t investigate NBC and other broadcast station owners? It may be because courts can compel the FCC to investigate “news distortion.”

This is exactly what happened to the Clinton FCC. As Melody Calkins and I wrote in August about the FCC’s news distortion rule:

Though uncodified and not strictly enforced, the rule was reiterated in the FCC’s 2008 broadcast guidelines. The outline of the rule was laid out in the 1998 case Serafyn v. CBS, involving a complaint by a Ukrainian-American who alleged that the “60 Minutes” news program had unfairly edited interviews to portray Ukrainians as backwards and anti-Semitic. The FCC dismissed the complaint but the D.C. Circuit reversed that dismissal and required FCC intervention. (CBS settled and the complaint was dropped before the FCC could intervene.)

The commissioners might personally wish broadcasters had full First Amendment protections and want to dismiss all challenges, but current law permits and encourages license challenges. The commission can be compelled to act because of the sins of omission of prior FCCs: deciding to retain the news distortion rule and other antiquated “public interest” regulations for broadcasters. The existence of these old media rules means the FCC’s hands are tied.

In recent months, I’ve come across a growing pool of young professionals looking to enter the technology policy field. Although I was lucky enough to find a willing and capable mentor to guide me through much of the nitty-gritty, many of these would-be policy entrepreneurs haven’t been as lucky. Most are keen to shift out of their current policy area, or are newcomers to Washington, D.C. looking to break into a technology policy career track. This is a town with no shortage of sage wisdom, and while much of it remains relevant to up-and-comers, I figured I would pen these thoughts based on my own experiences as a relative newcomer to the D.C. tech policy community.

I came to D.C. in 2013, spurred by the then-recent revelations of mass government surveillance in Edward Snowden’s NSA leaks. That event led me to the realization that the Internet was fragile, and that engaging in the battle of ideas in D.C. might be a career calling. So I packed up and moved to the nation’s capital, intent on joining the technology policy fray. When I arrived, however, I was immediately struck by the almost complete lack of jobs in, and focus on, technology issues in libertarian circles.

Through a series of serendipitous and fortuitous circumstances, I ultimately managed to break into a field that was still small and relatively under-appreciated. What we lacked in numbers and support we had to make up for in quality and determined effort. Although the tech policy community has grown rapidly in recent years, it remains a niche vocation relative to other policy tracks. That means there’s a lot of potential for rapid professional growth—if you can manage to get your foot in the door.

So if you’re interested in breaking into technology policy, here are some thoughts that might be of help.

The Mercatus Center at George Mason University has just released a new paper, “Permissionless Innovation and Immersive Technology: Public Policy for Virtual and Augmented Reality,” which I co-authored with Jonathan Camp. This 53-page paper can be downloaded via the Mercatus website, SSRN, or ResearchGate.

Here is the abstract for the paper:

Immersive technologies such as augmented reality, virtual reality, and mixed reality are finally taking off. As these technologies become more widespread, concerns will likely develop about their disruptive social and economic effects. This paper addresses such policy concerns and contrasts two different visions for governing immersive tech going forward. The paper makes the case for permissionless innovation, or the general freedom to innovate without prior constraint, as the optimal policy default to maximize the benefits associated with immersive technologies.

The alternative vision — the so-called precautionary principle — would be an inappropriate policy default because it would greatly limit the potential for beneficial applications and uses of these new technologies to emerge rapidly. Public policy for immersive technology should not be based on hypothetical worst-case scenarios. Rather, policymakers should wait to see which concerns or harms emerge and then devise ex post solutions as needed.

To better explain why precautionary controls on these emerging technologies would be such a mistake, Camp and I provide an inventory of the many VR, AR, and mixed reality applications that are already on the market–or soon could be–and that could provide society with profound benefits.

Internet regulation advocates are trying to turn a recent FCC Notice of Inquiry about the state of US telecommunications services into a controversy. Twelve US Senators have accused the FCC of wanting to “redefin[e] broadband” in order to “abandon further efforts to connect Americans.”

Given that Chairman Pai and the Commission are already taking actions to accelerate the deployment of broadband, with new proceedings and the formation of the Broadband Deployment Advisory Committee, the allegation that the current NOI is an excuse for inaction is perplexing.

The true “controversy” is much more mundane–reasonable people disagree about what congressional neologisms like “advanced telecommunications capability” mean. The FCC must interpret and apply the indeterminate language of Section 706 of the Telecommunications Act, which requires the Commission to determine “whether advanced telecommunications capability is being deployed in a reasonable and timely fashion.” If the answer is negative, the agency must “take immediate action to accelerate deployment of such capability by removing barriers to infrastructure investment and by promoting competition in the telecommunications market.” The inquiry is reported in an annual “Broadband Progress Report.” Much of the “scandal” of this proceeding is confusion about what “broadband” means.

What is broadband?

First: what qualifies as “broadband” download speed? It depends.

The OECD says anything above 256 kbps.

ITU standards set it at above 1.5 Mbps (or is it 2.0 Mbps?).

In the US, broadband is generally defined as a higher speed. The USDA’s Rural Utilities Service defines it as 4.0 Mbps.

The FCC’s 2015 Broadband Progress Report found, as Obama FCC officials put it, that “the FCC’s definition of broadband” is now 25 Mbps. This is why advocates insist “broadband access” includes only wireline services above 25 Mbps.

But in the same month, the Obama FCC determined in the Open Internet Order that anything above dialup speed–56 kbps–is “broadband Internet access service.”

So, according to regulation advocates, 1.5 Mbps DSL service isn’t “broadband access” service but it is “broadband Internet access service.” Likewise a 30 Mbps 4G LTE connection isn’t a “broadband access” service but it is “broadband Internet access service.”
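
To see the inconsistency concretely, here is a minimal sketch in Python. The function names and structure are my own, purely illustrative; only the thresholds come from the definitions cited above:

```python
# Illustrative sketch of the competing "broadband" definitions.
# Function names are hypothetical; thresholds are cited above.

def is_broadband_access(speed_mbps: float, wireline: bool) -> bool:
    """Advocates' reading of the 2015 Broadband Progress Report:
    only wireline service at 25 Mbps or above qualifies."""
    return wireline and speed_mbps >= 25.0

def is_broadband_internet_access_service(speed_mbps: float) -> bool:
    """2015 Open Internet Order: anything above dial-up (56 kbps)."""
    return speed_mbps > 0.056

for label, speed, wireline in [
    ("1.5 Mbps DSL", 1.5, True),
    ("30 Mbps 4G LTE", 30.0, False),
]:
    print(f"{label}: 'broadband access'? "
          f"{is_broadband_access(speed, wireline)}; "
          f"'broadband Internet access service'? "
          f"{is_broadband_internet_access_service(speed)}")
```

Both connections come out as “broadband Internet access service” but not “broadband access”–exactly the oddity described above.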

In other words, the word games about “broadband” are not coming from the Trump FCC. There is no consistent meaning of “broadband” because prior FCCs kept changing the definition and even used the term differently in different proceedings. As the Obama FCC said in 2009, “In previous reports to Congress, the Commission used the terms ‘broadband,’ ‘advanced telecommunications capability,’ and ‘advanced services’ interchangeably.”

Instead, what is going on is that the Trump FCC is trying to apply Section 706 to the current broadband market. The main questions are: what is “advanced telecommunications capability,” and is it “being deployed in a reasonable and timely fashion”?

Is mobile broadband an “advanced telecommunications capability”?

Previous FCCs declined to adopt a speed benchmark for when wireless service satisfies the “advanced telecommunications capability” definition. The so-called controversy arises because the latest NOI revisits this omission in light of consumer trends. The NOI straightforwardly asks whether mobile broadband above 10 Mbps satisfies the statutory definition of “advanced telecommunications capability.”

For that, the FCC must consult the statute. Such a capability, the statute says, is technology-neutral (i.e. includes wireless and “fixed” connections) and “enables users to originate and receive high-quality voice, data, graphics, and video telecommunications.”

Historically, since the statute doesn’t provide much precision, the FCC has examined subscription rates for various broadband speeds and services. From 2010 to 2015, the Obama FCCs defined advanced telecommunications capability as a fixed connection of 4 Mbps. In 2015, as mentioned, that benchmark was raised to 25 Mbps.

Regulation advocates fear that if the FCC looks at subscription rates, the agency might find that mobile broadband above 10 Mbps is an advanced telecommunications capability. This finding, they feel, would undermine the argument that the US broadband market needs intense regulation. According to recent Pew surveys, 12% of adults–about 28 million people–are “wireless only” and don’t have a wireline subscription. Those numbers certainly raise the possibility that mobile broadband is an advanced telecommunications capability.

Let’s look at the three fixed broadband technologies that “pass” the vast majority of households–cable modem, DSL, and satellite–and narrow the data to connections 10 Mbps or above.*

Home broadband connections (10 Mbps+)
Cable modem – 54.4 million
DSL – 11.8 million
Satellite – 1.4 million

It’s hard to know for sure, since Pew measures adult individuals and the FCC measures households, but it’s possible that more people have 4G LTE as home broadband (about 28 million adults, plus their families) than have 10 Mbps+ DSL as home broadband (11.8 million households).
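
As a rough check on those figures, here is a back-of-the-envelope sketch. The US adult-population number is my assumption, used only to recover Pew’s roughly 28 million estimate:

```python
# Back-of-the-envelope comparison of mobile-only adults (Pew) with
# 10 Mbps+ DSL households (FCC). The adult population is assumed.
us_adults = 235_000_000        # assumed approximate US adult population
wireless_only_share = 0.12     # Pew: share of adults who are wireless-only

wireless_only_adults = us_adults * wireless_only_share   # ~28 million
dsl_households_10mbps = 11_800_000                       # FCC figure above

print(f"Wireless-only adults:    ~{wireless_only_adults / 1e6:.0f} million")
print(f"10 Mbps+ DSL households: {dsl_households_10mbps / 1e6:.1f} million")
# Individuals and households are different units, so the comparison is
# only suggestive, as noted above.
```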

Subscription rates aren’t the end of the inquiry, but the fact that millions of households are going mobile-only rather than DSL or cable modem is suggestive evidence that mobile broadband offers an advanced telecommunications capability. (Considering T-Mobile is now providing 50 GB of data per line per month, mobile-only household growth will likely accelerate.)

Are high-speed services “being deployed in a reasonable and timely fashion”?

The second inquiry is whether these advanced telecommunications capabilities “are being deployed in a reasonable and timely fashion.” Again, the statute doesn’t give much guidance, but consumer adoption of high-speed wireline and wireless broadband has been impressive.

So few people had 25 Mbps for so long that the FCC didn’t record it in its Internet Access Services reports until 2011. At the end of 2011, 6.3 million households subscribed to 25 Mbps. Less than five years later, in June 2016, over 56 million households subscribed. In the last year alone, fixed providers extended 25 Mbps or greater speeds to 21 million households.
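
To put that adoption curve in perspective, here is a quick sketch of the implied growth rate using the figures above; the 4.5-year span (end of 2011 to June 2016) is my reading of the dates:

```python
# Implied growth in 25 Mbps+ household subscriptions, from the FCC
# Internet Access Services figures cited above.
start_households = 6_300_000   # end of 2011
end_households = 56_000_000    # June 2016
years = 4.5                    # end-2011 to mid-2016

multiple = end_households / start_households
cagr = multiple ** (1 / years) - 1
print(f"Growth multiple: {multiple:.1f}x")          # ~8.9x
print(f"Compound annual growth rate: {cagr:.0%}")   # ~62% per year
```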

The FCC is not completely without guidance on this question. As part of the 2008 Broadband Data Improvement Act, Congress instructed the FCC to use international comparisons in its Section 706 report. International comparisons likewise suggest that the US is deploying advanced telecommunications capability in a timely manner. For instance, according to the OECD, the US has 23.4 fiber and cable modem connections per 100 inhabitants, which far exceeds the OECD average of 16.2 per 100 inhabitants.**

Anyway, the sky is not falling because the FCC is asking about mobile broadband subscription rates. More can be done to accelerate broadband deployment–particularly if the government frees up more spectrum and local governments improve their permitting processes–but the Section 706 inquiry offers little that is controversial or new.


*Fiber and fixed wireless connections, 9.6 million and 0.3 million subscribers, respectively, are also noteworthy but these 10 Mbps+ technologies only cover certain areas of the country.

**America’s high rank in the OECD is similar if DSL is included, but the quality of DSL varies widely and often doesn’t provide 10 Mbps or 25 Mbps speeds.

Hurricanes Harvey and Irma mark the first time two Category 4 hurricanes have made U.S. landfall in the same year. Current estimates put the combined damage from the two hurricanes at between $150 billion and $200 billion.

If there is any positive story within these horrific disasters, it is that these events have prompted a renewed sense of community and an outpouring of support from across the nation, from the star-studded Hand in Hand relief concert and J.J. Watt’s Twitter fundraiser to smaller efforts by local marching bands and police departments in faraway states.

What has made these relief efforts different from those after past hurricanes? They have been enabled by technology that was unavailable during earlier disasters, such as Hurricane Katrina.