Building broadband takes time. There’s permitting, environmental reviews, engineering, negotiations with city officials and pole owners, and other considerations.

That said, temporary wireless broadband systems can be set up quickly, sometimes in days or weeks rather than the months or years wireline networks require. Setting up outdoor WiFi, as some schools have done (HT Billy Easley II), is a good step, but WiFi has its limits and more can be done.

The FCC has done a great job freeing up more spectrum on a temporary basis for the COVID-19 crisis, like allowing carriers to use Dish’s unused cellular spectrum. Wireless systems need more than spectrum, however. Operators need real estate, electricity, backhaul, and permission. This is where cities, counties, and states can help.

Waive or simplify permitting

States, counties, and cities should consider waiving or simplifying their permitting for temporary wireless systems, particularly in rural or low-income areas where adoption lags.

Cellular providers set up Distributed Antenna Systems (DAS) and Cells on Wheels (COWs) for events like football games, parades, festivals, and emergency response after hurricanes. These provide good coverage and capacity in a pinch.

There are other ad hoc wireless systems that can be set up quickly in local areas, like WISP transmitters, cellular or WISP backhaul, outdoor WiFi, and mesh networks.

Broadband to-go.

Allow rent-free access to municipal property

Public agencies own real estate and buildings that would lend themselves to temporary wireless facilities. These sites typically have power, and taller public buildings and water towers give wireless systems greater coverage. Cities should consider offering temporary space rent-free for the duration of the crisis.

Many cities and counties also have dark fiber and lit fiber networks that serve public facilities like police, fire, and hospitals. If there’s available capacity, state and local public agencies should consider providing cheap or free access to the municipal fiber network.

Now, these temporary measures won’t work miracles. Operators are looking at months of cash constraints and probably don’t have many field technicians available. But the temporary waiver of permitting and the easy access to public property could provide quick, needed broadband capacity in rural and hard-to-reach areas.

To commemorate its 40th anniversary, the Mercatus Center asked its scholars to share the books that have been most influential or formative in the development of their analytical approach and worldview. Head over to the Mercatus website to check out my complete write-up of my Top 5 picks for books that influenced my thinking on innovation policy and progress studies. But here is a quick summary:

#1) Samuel C. Florman – “The Existential Pleasures of Engineering” (1976). His book surveys “antitechnologists” operating in several academic fields & then proceeds to utterly demolish their claims with remarkable rigor and wit.

#2) Aaron Wildavsky – “Searching for Safety” (1988). The most trenchant indictment of the “precautionary principle” ever penned. His book helped to reshape the way risk analysts would think about regulatory trade-offs going forward.

#3) Thomas Sowell – “A Conflict of Visions: Ideological Origins of Political Struggles” (1987). It’s like the Rosetta Stone of political theory; the key to deciphering why people think the way they do about human nature, economics, and politics.  

#4) Virginia Postrel – “The Future and Its Enemies” (1998). Postrel reconceptualized the debate over progress as not Left vs. Right but rather dynamism— “a world of constant creation, discovery, and competition”—versus the stasis mentality. More true now than ever before.

#5) Calestous Juma – “Innovation and Its Enemies” (2016). A magisterial history of earlier battles over progress. Juma reminds us of the continued importance of “oiling the wheels of novelty” to constantly replenish the well of important ideas and innovations.

The future needs friends because the enemies of innovative dynamism are voluminous and vociferous. It is a lesson we must never forget. Thanks to these five authors and their books, we never will.

Finally, the influence of these scholars is evident on every page of my last book (“Permissionless Innovation”) and my new one (“Evasive Entrepreneurs and the Future of Governance: How Innovation Improves Economies and Governments”). I thank them all!

In a new essay in The Dallas Morning News (“Licensing restrictions for health care workers need to be flexible to fight coronavirus”), Trace Mitchell and I discuss recent efforts to reform occupational licensing restrictions for health care workers to help fight the coronavirus. Trace and I have written extensively about the need for licensing flexibility over the past couple of years, but it is needed now more than ever. Luckily, some positive reforms are now underway.

We highlight efforts in states like Massachusetts and Texas to reform their occupational licensing rules in response to the crisis, as well as federal reforms aimed at allowing reciprocity across state lines. We conclude by noting that:

It should not take a crisis of this magnitude for policymakers to reconsider the way we prevent fully qualified medical professionals from going where they are most needed. But that moment is now upon us. More leaders would be wise to conduct a comprehensive review of regulatory burdens that hinder sensible, speedy responses to the coronavirus crisis.

If nothing else, the relaxation of these rules should give us a better feel for how necessary strict licensing requirements truly are. Chances are, we will learn just how costly the regulations have been all along.
Read the entire piece here.

I saw a Bloomberg News report that officials in Austria and Italy are seeking (aggregated, anonymized) users’ location data from cellphone companies to see if local and national lockdowns are effective.

It’s an interesting idea that raises some possibilities for US officials and tech companies to consider to combat the crisis in the US. Caveat: these are very preliminary thoughts.

Cellphone location data from a phone company is useful but imprecise about your movements. It can typically show where you are within a half-mile to a mile.

But smartphone app location data is much more precise because it comes from GPS rather than cell towers. Apps with location services can show people’s movements within meters, not within a half-mile as cell towers do. I suspect more than 90% of smartphone users have GPS location services turned on (Google Maps, Facebook, Yelp, etc.). App companies have rich datasets of people’s daily movements.

Step 1 – App companies isolate and share location trends with health officials

This would need to be aggregated and anonymized, of course. Tech companies, working with health officials, should, as Balaji Srinivasan says, identify red and green zones. The point is not to identify individuals but to make generalizations about whether a neighborhood or town is practicing good social distancing.
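
To make the aggregation step concrete, below is a minimal Python sketch of how anonymized, device-level summaries might be rolled up into zone-level distancing reports. The data layout, zone names, and thresholds here are hypothetical illustrations for this post, not anything app companies or health agencies have actually proposed.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-device daily summaries, already stripped of identities and
# precise coordinates before sharing: (zone_name, hours_spent_away_from_home).
daily_summaries = [
    ("maple_heights", 1.5),
    ("maple_heights", 2.0),
    ("riverside", 9.0),
    ("riverside", 7.5),
]

def zone_distancing_report(summaries, min_devices=100, strict_hours=2.0):
    """Aggregate device-level summaries into zone-level averages.

    Zones with too few reporting devices are dropped so that no
    individual's movements can be inferred from the report.
    """
    by_zone = defaultdict(list)
    for zone, hours_away in summaries:
        by_zone[zone].append(hours_away)

    report = {}
    for zone, hours in by_zone.items():
        if len(hours) < min_devices:
            continue  # suppress small cells to preserve anonymity
        avg_away = mean(hours)
        report[zone] = {
            "avg_hours_away": round(avg_away, 1),
            "strict_distancing": avg_away <= strict_hours,
        }
    return report

# Health officials would combine a report like this with infection and
# hospitalization data to label zones green or red.
print(zone_distancing_report(daily_summaries, min_devices=2))
```

The small-cell suppression is the key design choice: officials would only ever see neighborhood-level averages, never an individual device’s movements.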

Step 2 – In green zones, where infection and hospitalization rates are low and app data shows people are strictly distancing, distribute COVID-19 tests.

If people are spending 22 hours a day at home, with only brief visits to the grocery store and parks, that’s a good neighborhood. We need tests distributed daily in non-infected areas, perhaps at grocery stores and via USPS and Amazon deliveries. As soon as test production ramps up, tests need to flood into the areas that are healthy. This achieves two things:

  • Asymptomatic people who might spread the virus can stay home.
  • Non-infected people can start returning to work and a semi-normal life of movement, confident that others who are out are not contagious.

Step 3 – In red zones, where infection and hospitalization rates are high and people aren’t strictly distancing, step up public education and restrictions.

At least in Virginia, there is county-level data about where the hotspots are. I expect other states know the counties and neighborhoods that are hit hard. Where those hotspots overlap with areas that aren’t distancing, step up distancing measures and restrictions.

That still leaves open what to do about yellow zones adjacent to red zones, but the main priority should be to identify the green and the red. The longer health officials and the public fly blind with no end in sight, the more people will grow frustrated, lose jobs, shutter businesses, and violate distancing rules.

To help slow the spread of the coronavirus, the GMU campus is moving to remote instruction and Mercatus is moving to remote work for employees until the risk subsides. GMU and Mercatus join thousands of other universities and businesses this week. Millions of people will be working from home, and it will be a major test of American broadband and cellular networks.

There will likely be a loss of productivity nationwide–some things just can’t be done well remotely. But hopefully broadband access is not a major issue. What is the state of US networks? How many people lack the ability to do remote work and remote homework?

The FCC and Pew Research Center keep pretty good track of broadband buildout and adoption. There are many bright spots but some areas of concern as well.

Who lacks service?

The top question: How many people want broadband but lack adequate service or have no service?

The good news is that around 94% of Americans have access to 25 Mbps landline broadband. (Millions more have access if you include broadband from cellular and WISP providers.) It’s not much consolation to rural customers and remote workers who have limited or no options, but these are good numbers.

According to Pew’s 2019 report, about 2% of Americans cite inadequate or no options as the main reason they don’t have broadband. What is concerning is that this 2% number hasn’t budged in years. In 2015, about the same number of Americans cited inadequate or no options as the main reason they didn’t have home broadband. This resembles what I’ve called “the 2% problem”–about 2% of the most rural American households are extremely costly to serve with landline broadband. Satellite, cellular, or WISP service will likely be the best option.

Mobile broadband trends

Mobile broadband is increasingly an option for home broadband. About 24% of Americans with home Internet are mobile only, according to Pew, up from ~16% in 2015.

The ubiquity of high-speed mobile broadband has been the big story in recent years. Per FCC data, from 2009 to 2017 (the most recent year for which we have data), mobile connections grew by about 30 million per year on average. In December 2017 there were about 313 million mobile subscriptions.

Coverage is very good in the US. OpenSignal uses crowdsourced data and software to determine how frequently users’ phones have a 4G LTE network available (a proxy for coverage and network quality) around the world. The US ranked fourth in the world (86%) in 2017, beating out every European country save Norway.

There was also a big improvement in mobile speeds. In 2009, a 3G world, almost all connections were below 3 Mbps. In 2017, a world of 4G LTE, almost all connections were above 3 Mbps.

Landline broadband trends

Landline broadband also increased significantly. From 2009 to 2017, there were about 3.5 million new connections per year, reaching about 108 million connections in 2017. In December 2009, about half of landline connections were below 3 Mbps.

There were some notable jumps in high-speed and rural broadband deployment. There was a big jump in fiber-to-the-premises (FTTP) connections, like FiOS and Google Fiber. From 2012 to 2017, the number of FTTP connections more than doubled, to 12.6 million. Relatedly, sub-25 Mbps connections have been falling rapidly while 100 Mbps+ connections have been shooting up. In 2017, there were more connections with 100 Mbps+ (39 million) than there were connections below 25 Mbps (29 million).

In the most recent five years for which we have data, the number of rural subscribers (not households) with 25 Mbps service increased by 18 million (from 29 million to 47 million).

More Work

We only have good data for the first year of the Trump FCC, so it’s hard to evaluate, but signs are promising. One of Chairman Pai’s first actions was creating an advisory committee on broadband deployment (I’m a member). Anecdotally, it’s been fruitful to regularly have industry, academics, advocates, and local officials in the same room to discuss consensus policies. The FCC has acted on many of those policies.

The rollback of common carrier regulations for the Internet, the pro-5G deployment initiatives, and limiting unreasonable local fees for cellular equipment have all helped increase deployment and service quality.

An effective communications regulator largely stays out of the way and removes hindrances to private sector investment. But the FCC does manage some broadband subsidy programs. The Trump FCC has made some improvements to the $4.5 billion annual rural broadband programs. The 17 or so rural broadband subprograms have metastasized over the years, making for a kludgey and expensive subsidy system.

The recent RDOF reforms are a big improvement since they fund a reverse auction program that shifts money away from the wasteful legacy subsidy programs. Increasingly, rural households get broadband from WISPs, satellite providers, and rural cable companies–the RDOF reforms recognize that reality.

Hopefully one day reforms will go even further and fund broadband vouchers. It’s been longstanding FCC policy to fund rural broadband providers (typically phone companies serving rural areas) rather than subsidizing rural households. The FCC should consider a voucher model for rural broadband, $5 or $10 or $40 per household per month, depending on the geography. Essentially, the FCC should do for rural households what it already does for low-income households–provide a monthly subsidy to make broadband costs more affordable.

Many of these good deployment trends began in the Obama years, but the Trump FCC has made it a national priority to improve broadband deployment and services. It appears to be working. With the coronavirus and a huge increase in remote work, US networks will be put to a unique test.

I was pleased to see the American Psychological Association’s new statement slowly reversing course on misguided past statements about video games and acts of real-world violence. As Kyle Orland reports in Ars Technica, the APA has clarified its earlier statement on the relationship between violent video game content and actual youth behavior. The APA’s old statement said that evidence “confirms [the] link between playing violent video games and aggression.” But the APA has come around and now says that “there is insufficient scientific evidence to support a causal link between violent video games and violent behavior.” More specifically, the APA says:

The following resolution should not be misinterpreted or misused by attributing violence, such as mass shootings, to violent video game use. Violence is a complex social problem that likely stems from many factors that warrant attention from researchers, policy makers and the public. Attributing violence to violent video gaming is not scientifically sound and draws attention away from other factors.

This is a welcome change of course because the APA’s earlier statements were being used by politicians and media activists who favored censorship of video games. Hopefully that will no longer happen.

“Monkey see, monkey do” theories of media exposure leading to acts of real-world violence have long been among the most outrageously flawed theories in the fields of psychology and media studies.  All the evidence points the opposite way, as I documented a decade ago in a variety of studies. (For a summary, see my 2010 essay, “More on Monkey See-Monkey Do Theories about Media Violence & Real-World Crime.”)

In fact, there might even be something to the “cathartic effect hypothesis,” or the idea first articulated by Aristotle (“katharsis”) that watching dramatic portrayals of violence could lead to “the proper purgation of these emotions.” (See my 2010 essay on this, “Video Games, Media Violence & the Cathartic Effect Hypothesis.”)

Of course, this doesn’t mean that endless exposure to video game or TV and movie violence is a good thing. Prudence and good parenting are still essential. Some limits are smart. But the idea that a kid who plays or watches violent acts will automatically become violent was always nonsense. It’s time we put that theory to rest. Thanks to the new APA statement, we are one step closer.

P.S. I recently penned an essay about my long love affair with video games that you might find entertaining: “Confessions of a ‘Vidiot’: 50 Years of Video Games & Moral Panics.”

Michael Kotrous and I submitted a comment to the FAA about their Remote ID proposals. While we agree with the need for a “digital license plate” for drones, we’re skeptical that requiring an Internet connection is necessary and that an interoperable, national drone traffic management system will work well.

The FAA deserves credit for rigorously estimating the costs of its requirements, which it puts at around $450 million to $600 million over 10 years. These costs largely fall on drone operators and drone manufacturers for network (say, LTE) subscriptions and equipment.

The FAA’s proposed requirements aren’t completely hashed out, but we raised two points of caution.

One, many drone flights won’t stray from a pre-programmed route or leave private property. For instance, roof inspections, medical supply deliveries across a hospital campus, train track inspections, and crop spraying via drone all remain on private property. They pose a de minimis safety risk to manned aircraft, and requiring networking equipment and subscriptions for them seems excessive.

Two, we’re not keen on the FAA and NASA plans for an interoperable, national drone traffic management system. A simple wireless broadcast from a drone should be enough in most circumstances. The FAA proposal would require drone operators to contract with UAS Service Suppliers (USSs), who would be contractors of the FAA. Technical standards would come later. This convoluted system of making virtually all drone operations known to the FAA is likely to run aground on technical complexity, technical stagnation, an FAA-blessed oligopoly of USSs, or all of the above.

The FAA should instead consider allowing states, cities, and landowners to make rules for drone operations that occur solely on their property. States are ready to step in. The North Dakota legislature, for instance, authorized $28 million a few months ago for a statewide drone management system. Other states will follow suit, and a federated, geographically separated drone management system could develop, if the FAA allows it. That would reduce the need for complex, interoperable USS and national drone traffic management systems.

Further reading:

Refine the FAA’s Remote ID Rules to Ensure Aviation Safety and Public Confidence, comment to the FAA (March 2020), https://www.mercatus.org/publications/technology-and-innovation/refine-faa%E2%80%99s-remote-id-rules-ensure-aviation-safety-and

Auctioning Airspace, North Carolina Journal of Law & Technology (October 2019), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3284704

Last week I attended the Section 230 cage match workshop at the DOJ. It was a packed house, likely because AG Bill Barr gave opening remarks. It was fortuitous timing for me: my article with Jennifer Huddleston, The Erosion of Publisher Liability in American Law, Section 230, and the Future of Online Curation, was published 24 hours before the workshop by the Oklahoma Law Review.

These were my impressions of the event:

I thought it was a pretty well-balanced event and surprisingly civil for such a contentious topic. There were strong Section 230 defenders and strong Section 230 critics, and several who fell in between. There were a couple of cheers after a few pointed statements from panelists, but the audience didn’t seem to fall on one side or the other. I’ll add that my friend and co-blogger Neil Chilson gave an impressive presentation about how Section 230 helped make the “long tail” of beneficial Internet-based communities possible.

AG Barr’s opening remarks are available online. A few things jumped out. He suggested that Section 230 had its place but that Internet companies are not an infant industry anymore. In his view, the courts have expanded Section 230 beyond the drafters’ intent, and the Reno decision “unbalanced” the protections, which were intended to protect minors. The gist of his statement was that the law needs to be “recalibrated.”

Each of these points was disputed by one or more panelists, but the message to the Internet industry was clear: the USDOJ is scrutinizing industry concentration and its relationship to illegal and antisocial online content.

The workshop signals that there is now a large, bipartisan coalition that would like to see Section 230 “recalibrated.” The problem for this coalition is that its members don’t agree on what types of content providers should be liable for, and they are often at cross-purposes. The problematic content ranges from sex trafficking to stalking to opiate trafficking to revenge porn to unfair political ads. For conservatives, social media companies take down too much content, intentionally helping progressives. For progressives, social media companies leave up too much content, unwittingly helping conservatives.

I’ve yet to hear a convincing way to modify Section 230 that (a) satisfies this shaky coalition, (b) would be practical to comply with, and (c) would be constitutional.

Now, Section 230 critics are right: the law blurs the line between publisher and conduit. But this is not unique to Internet companies. The fact is, courts (and federal agencies) blurred the publisher-conduit dichotomy for fifty years for mass media distributors and common carriers as technology and social norms changed. Some cases that illustrate the phenomenon:

In Auvil v. CBS 60 Minutes, a 1991 federal district court decision, Washington apple growers sued local CBS affiliates for airing allegedly defamatory programming. The federal district court dismissed the case on the grounds that the affiliates were conduits of CBS programming. Critically, the court recognized that the CBS affiliates “had the power to” exercise editorial control over the broadcast and “in fact occasionally [did] censor programming . . . for one reason or another.” Still, case dismissed. The principle has been cited by other courts. Publishers can be conduits.

Conduits can also be publishers. In 1989, Congress passed a law requiring phone providers to restrict minors’ access to “dial-a-porn” services. Dial-a-porn companies sued. In Information Providers Coalition v. FCC, the 9th Circuit Court of Appeals held that regulated common carriers are “free under the Constitution to terminate service” to providers of indecent content. The Court relied on its decision a few years earlier in Carlin Communications, noting that when a common carrier phone company is connecting thousands of subscribers simultaneously to the same content, the “phone company resembles less a common carrier than it does a small radio station.”

Many Section 230 reformers believe Section 230 mangled the common law and would like to see the publisher-conduit dichotomy restored. As our research shows, that dichotomy had already been blurred for decades. Until advocates and lawmakers acknowledge these legal trends and plan accordingly, reformers risk throwing out the baby with the bathwater.

Relevant research:
Brent Skorup & Jennifer Huddleston, The Erosion of Publisher Liability in American Law, Section 230, and the Future of Online Curation (Oklahoma Law Review).

Brent Skorup & Joe Kane, The FCC and Quasi–Common Carriage: A Case Study of Agency Survival (Minnesota Journal of Law, Science & Technology).

On the latest Institute for Energy Research podcast, I joined Paige Lambermont to discuss:

  • the precautionary principle vs. permissionless innovation;
  • risk analysis trade-offs;
  • the future of nuclear power;
  • the “pacing problem”;
  • regulatory capture;
  • evasive entrepreneurialism;
  • “soft law”;
  • … and why I’m still bitter about losing the 6th grade science fair!

Our discussion was inspired by my recent essay, “How Many Lives Are Lost Due to the Precautionary Principle?”

The race for artificial intelligence (AI) supremacy is on, with governments across the globe looking to take the lead in the next great technological revolution. As they did during the internet era, the US and Europe are once again squaring off with competing policy frameworks.

In early January, the Trump Administration announced a new light-touch regulatory framework and then followed up with a proposed doubling of federal R&D spending on AI and quantum computing. This week, the European Commission issued a major policy framework for AI technologies and billed it as “a European approach to excellence and trust.”

It seems the EU basically wants to have its cake and eat it too by marrying an ambitious industrial policy to a precautionary regulatory regime. We’ve seen this show before. Europe is doubling down on the same policy regime it used for the internet and digital commerce. It did not work out well for the continent then, and there are reasons to think it will backfire again for AI technologies.