Miscellaneous

I saw a Bloomberg News report that officials in Austria and Italy are seeking (aggregated, anonymized) users’ location data from cellphone companies to see if local and national lockdowns are effective.

It’s an interesting idea, and it raises some possibilities for US officials and tech companies to consider as they combat the crisis in the US. Caveat: these are very preliminary thoughts.

Cellphone location data from a phone company is useful but imprecise about your movements: it can typically place you only within a half-mile to a mile.

But smartphone app location data is much more precise because it comes from GPS, not cell towers. Apps with location services enabled can show people’s movements within meters, not within a half-mile. I suspect 90%+ of smartphone users have GPS location services on (Google Maps, Facebook, Yelp, etc.), so app companies have rich datasets of people’s daily movements.

Step 1 – App companies isolate and share location trends with health officials

This would need to be aggregated and anonymized, of course. Tech companies, working with health officials, should, as Balaji Srinivasan says, identify red and green zones. The point is not to identify individuals but to make generalizations about whether a neighborhood or town is practicing good social distancing.
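To make this concrete, below is a minimal sketch (in Python) of the kind of zone classification I have in mind. The data schema, the zone names, and the two-hours-away threshold are my own illustrative assumptions, and a real system would need much stronger privacy protections than simple aggregation:

```python
from collections import defaultdict
from statistics import mean

def classify_zones(daily_records, max_avg_hours_away=2.0):
    """Toy red/green zone classifier over already-anonymized records.

    daily_records: iterable of (zone_id, hours_away_from_home) tuples,
    one per device per day, with all identifiers stripped upstream.
    """
    by_zone = defaultdict(list)
    for zone_id, hours_away in daily_records:
        by_zone[zone_id].append(hours_away)

    # Only the zone-level average is reported; no individual's movements
    # ever leave the aggregation step.
    return {
        zone_id: ("green" if mean(hours) <= max_avg_hours_away else "red")
        for zone_id, hours in by_zone.items()
    }

# One neighborhood staying home ~22 hours a day, one moving freely.
records = [("zone_a", 1.5), ("zone_a", 2.0), ("zone_a", 1.0),
           ("zone_b", 9.0), ("zone_b", 11.0), ("zone_b", 8.5)]
print(classify_zones(records))  # {'zone_a': 'green', 'zone_b': 'red'}
```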

Step 2 – In green zones, where infection and hospitalization rates are low and app data shows people are strictly distancing, distribute COVID-19 tests.

If people are spending 22 hours a day at home, aside from brief visits to the grocery store and parks, that’s a good sign for a neighborhood. We need tests distributed daily in non-infected areas, perhaps at grocery stores and via USPS and Amazon deliveries. As soon as test production ramps up, tests need to flood into healthy areas. This achieves two things:

  • Asymptomatic people who might spread the virus can stay home.
  • Non-infected people can start returning to work and to semi-normal movement, confident that others who are out are non-contagious.

Step 3 – In red zones, where infection and hospitalization rates are high and people aren’t strictly distancing, step up public education and restrictions.

At least in Virginia, there is county-level data about where the hotspots are. I expect other states know the counties and neighborhoods that are hit hard. Where those hard-hit areas overlap with areas that aren’t distancing, step up public education and restrictions.

That still leaves open what to do about yellow zones that are adjacent to red zones, but the main priority should be to identify the green and the red. The longer health officials and the public fly blind with no end in sight, the more people will get frustrated, lose jobs, shutter businesses, and violate distancing rules.

To help slow the spread of the coronavirus, the GMU campus is moving to remote instruction and Mercatus is moving to remote work for employees until the risk subsides. GMU and Mercatus join thousands of other universities and businesses making similar moves this week. Millions of people will be working from home, and it will be a major test of American broadband and cellular networks.

There will likely be a loss of productivity nationwide–some things just can’t be done well remotely. But hopefully broadband access is not a major issue. What is the state of US networks? How many people lack the ability to do remote work and remote homework?

The FCC and Pew Research keep pretty good track of broadband buildout and adoption. There are many bright spots but some areas of concern as well.

Who lacks service?

The top question: How many people want broadband but lack adequate service or have no service?

The good news is that around 94% of Americans have access to 25 Mbps landline broadband. (Millions more have access if you include broadband from cellular and WISP providers.) It’s not much consolation to rural customers and remote workers who have limited or no options, but these are good numbers.

According to Pew’s 2019 report, about 2% of Americans cite inadequate or no options as the main reason they don’t have broadband. What is concerning is that this 2% number hasn’t budged in years. In 2015, about the same number of Americans cited inadequate or no options as the main reason they didn’t have home broadband. This resembles what I’ve called “the 2% problem”–about 2% of the most rural American households are extremely costly to serve with landline broadband. Satellite, cellular, or WISP service will likely be the best option.

Mobile broadband trends

Mobile broadband is increasingly an option for home broadband. About 24% of Americans with home Internet are mobile only, according to Pew, up from ~16% in 2015.

The ubiquity of high-speed mobile broadband has been the big story in recent years. Per FCC data, from 2009 to 2017 (the most recent year for which we have data), mobile connections increased by about 30 million annually. In December 2017, there were about 313 million mobile subscriptions.

Coverage is very good in the US. OpenSignal uses crowdsourced data and software to determine how frequently users’ phones have a 4G LTE network available (a proxy for coverage and network quality) around the world. The US ranked fourth in the world (86%) in 2017, beating out every European country save Norway.

There was also a big improvement in mobile speeds. In 2009, a 3G world, almost all connections were below 3 Mbps. In 2017, a world of 4G LTE, almost all connections were above 3 Mbps.

Landline broadband trends

Landline broadband also increased significantly. From 2009 to 2017, there were about 3.5 million new connections per year, reaching about 108 million connections in 2017. In December 2009, about half of landline connections were below 3 Mbps.

There were some notable jumps in high-speed and rural broadband deployment. There was a big jump in fiber-to-the-premises (FTTP) connections, like FiOS and Google Fiber. From 2012 to 2017, the number of FTTP connections more than doubled, to 12.6 million. Relatedly, sub-25 Mbps connections have been falling rapidly while 100 Mbps+ connections have been shooting up. In 2017, there were more connections with 100 Mbps+ (39 million) than there were connections below 25 Mbps (29 million).

In the most recent five years for which we have data, the number of rural subscribers (not households) with 25 Mbps increased by 18 million (from 29 million to 47 million).

More Work

We only have good data for the first year of the Trump FCC, so it’s hard to evaluate, but the signs are promising. One of Chairman Pai’s first actions was creating a committee to advise the FCC on broadband deployment (I’m a member). Anecdotally, it’s been fruitful to regularly have industry, academics, advocates, and local officials in the same room to discuss consensus policies. The FCC has acted on many of those policies.

The rollback of common carrier regulations for the Internet, pro-5G deployment initiatives, and limits on unreasonable local fees for cellular equipment have all helped increase deployment and service quality.

An effective communications regulator largely stays out of the way and removes hindrances to private-sector investment. But the FCC does manage some broadband subsidy programs. The Trump FCC has made some improvements to the $4.5 billion annual rural broadband programs. The 17 or so rural broadband subprograms have metastasized over the years, making for a kludgey and expensive subsidy system.

The recent RDOF reforms are a big improvement since they fund a reverse auction program to shift money away from the wasteful legacy subsidy programs. Increasingly, rural households get broadband from WISP, satellite, and rural cable companies–the RDOF reforms recognize that reality.

Hopefully one day reforms will go even further and fund broadband vouchers. It’s been longstanding FCC policy to fund rural broadband providers (typically phone companies serving rural areas) rather than subsidizing rural households. The FCC should consider a voucher model for rural broadband, $5 or $10 or $40 per household per month, depending on the geography. Essentially the FCC should do for rural households what the FCC does for low-income households–provide a monthly subsidy to make broadband costs more affordable.
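To illustrate the administrative simplicity, here is a minimal sketch of such a voucher schedule. The tiers are hypothetical and the dollar amounts simply echo the figures above; nothing here reflects an actual FCC program:

```python
# Hypothetical geography tiers mapped to the $5/$10/$40 monthly amounts above.
VOUCHER_BY_TIER = {
    "low_cost_rural": 5.00,    # e.g., near existing cable or fiber plant
    "mid_cost_rural": 10.00,
    "high_cost_rural": 40.00,  # e.g., the "2% problem" households
}

def monthly_voucher(tier: str) -> float:
    """Monthly household subsidy, payable toward any qualifying provider:
    WISP, satellite, cellular, or landline."""
    return VOUCHER_BY_TIER.get(tier, 0.0)

print(monthly_voucher("high_cost_rural"))  # 40.0
```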

Many of these good deployment trends began in the Obama years, but the Trump FCC has made it a national priority to improve broadband deployment and services. It appears to be working. With the coronavirus and a huge increase in remote work, US networks will be put to a unique test.

Michael Kotrous and I submitted a comment to the FAA about their Remote ID proposals. While we agree with the need for a “digital license plate” for drones, we’re skeptical that requiring an Internet connection is necessary and that an interoperable, national drone traffic management system will work well.

The FAA deserves credit for rigorously estimating the costs of their requirements, which they set at around $450 million to $600 million over 10 years. These costs largely fall on drone operators and on drone manufacturers for network (say, LTE) subscriptions and equipment.

The FAA’s proposed requirements aren’t completely hashed out, but we raised two points of caution.

One, many drone flights won’t stray from a pre-programmed route or leave private property. For instance, roof inspections, medical supply deliveries across a hospital campus, train track inspections, and crop spraying via drone all remain on private property. They pose a de minimis safety concern to manned aircraft, and requiring networking equipment and subscriptions for them seems excessive.

Two, we’re not keen on the FAA and NASA plans for an interoperable, national drone traffic management system. A simple wireless broadcast from a drone should be enough in most circumstances. The FAA proposal would require drone operators to contract with UAS Service Suppliers (USSs), who would be contractors of the FAA. Technical standards would come later. This convoluted system of making virtually all drone operations known to the FAA is likely to run aground on technical complexity, technological stagnation, an FAA-blessed oligopoly in USSs, or all of the above.
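To show why a broadcast-only approach is simpler, here is a sketch of the kind of message a drone might broadcast locally, say over a Bluetooth or Wi-Fi beacon. The fields are my own assumptions for illustration; the FAA has not settled on a message format:

```python
from dataclasses import dataclass, asdict
import json, time

@dataclass
class RemoteIDBroadcast:
    """Illustrative 'digital license plate' payload; fields are assumptions."""
    drone_id: str      # registration or serial number
    lat: float
    lon: float
    altitude_m: float
    timestamp: float

    def to_payload(self) -> bytes:
        # Broadcast to anyone in radio range; no Internet connection
        # or USS subscription required.
        return json.dumps(asdict(self)).encode()

msg = RemoteIDBroadcast("FA-1234567", 38.83, -77.31, 60.0, time.time())
print(msg.to_payload())
```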

The FAA instead should consider allowing states, cities, and landowners to make rules for drone operations when operations are solely on their property. States are ready to step in. The North Dakota legislature, for instance, authorized $28 million a few months ago for a statewide drone management system. Other states will follow suit and a federated, geographically-separated drone management system could develop, if the FAA allows. That would reduce the need for complex, interoperable USS and national drone traffic management systems.

Further reading:

Refine the FAA’s Remote ID Rules to Ensure Aviation Safety and Public Confidence, comment to the FAA (March 2020), https://www.mercatus.org/publications/technology-and-innovation/refine-faa%E2%80%99s-remote-id-rules-ensure-aviation-safety-and

Auctioning Airspace, North Carolina Journal of Law & Technology (October 2019), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3284704

Below is a link to my submission for tomorrow’s Department of Justice workshop, “Section 230 – Nurturing Innovation or Fostering Unaccountability?”. I will be on panel three, “Imagining the Alternative.” From my opening:

Section 230 of the Communications Decency Act is a crucial part of the U.S.’s regulatory environment. The principles of individual responsibility embodied in Section 230 freed U.S. entrepreneurs to become the world’s best at developing innovative user-to-user platforms. Some people, including some people in industries disrupted by this innovation, are now calling to change Section 230. But there is little evidence that changing Section 230 would improve competition or innovation to the benefit of consumers. And there are good reasons to believe that increasing liability would hinder future competition and innovation and could ultimately harm consumers on balance. Thus, any proposed changes to Section 230 must be evaluated against seven important principles to ensure that the U.S. maintains a regulatory environment best suited to generate widespread human prosperity.

Congress has become a less important player in the field of technology policy. Why did that happen, and what are the ramifications for technological governance efforts going forward?

I’ve spent almost 30 years covering technology policy. There was a time in my life when I spent almost all my time as a policy analyst preoccupied with developments in the federal legislative arena. I lived in the trenches of Capitol Hill and interacted with lawmakers and their staff morning, noon, and night.

In recent years, however, I have spent very little time focused on the Legislative Branch because it has effectively become a non-actor on technology policy. It is not that congressional lawmakers stopped caring about tech policy. Interest actually remains quite high—perhaps higher than ever before. Congress also continues to introduce lots of bills, host plenty of hearings, and issue mountains of press releases related to tech policy issues.

Nonetheless, all that interest and activity has not really translated into much important legislation. Continue reading →

Coauthored with Mercatus MA Fellow Jessie McBirney

Flat standardized test scores, low college completion rates, and rising student debt have led many to question the bachelor’s degree as the universal ticket to the middle class. Now, bureaucrats are turning to the job market for new ideas. The result is a renewed enthusiasm for Career and Technical Education (CTE), which aims to “prepare students for success in the workforce.” Every high school student stands to benefit from a fun, rigorous, skills-based class, but the latest reauthorization of the Carl D. Perkins Act, which governs CTE at the federal level, betrays a faulty economic theory behind the initiative.

Modern CTE is more than a rebranding of yesterday’s vocational programs, which earned a reputation as “dumping grounds” for struggling students and, unfortunately, minorities. Today, CTE classes aim to be academically rigorous and cover career pathways ranging from manufacturing to Information Technology and STEM (science, technology, engineering, and mathematics). Most high school CTE occurs at traditional public schools, where students take a few career-specific classes alongside their core requirements.

Continue reading →

This week, the Trump Administration proposed a new policy framework for artificial intelligence (AI) technologies that attempts to balance the need for continued innovation with a set of principles to address concerns about new AI services and applications. This represents an important moment in the history of emerging technology governance as it creates a policy vision for AI that is generally consistent with earlier innovation governance frameworks established by previous administrations.

Generally speaking, the Trump governance vision for AI encourages regulatory humility and patience in the face of an uncertain technological future. However, the framework also endorses a combination of “hard” and “soft” law mechanisms to address policy concerns that have already been raised about developing or predicted AI innovations.

AI promises to revolutionize almost every sector of the economy and can potentially benefit our lives in numerous ways. But AI applications also raise a number of policy concerns, specifically regarding safety or fairness. On the safety front, for example, some are concerned about the AI systems that control drones, driverless cars, robots, and other autonomous systems. When it comes to fairness considerations, critics worry about “bias” in algorithmic systems that could deny people jobs, loans, or health care, among other things.

These concerns deserve serious consideration and some level of policy guidance or else the public may never come to trust AI systems, especially if the worst of those fears materialize as AI technologies spread. But how policy is formulated and imposed matters profoundly. A heavy-handed, top-down regulatory regime could undermine AI’s potential to improve lives and strengthen the economy. Accordingly, a flexible governance framework is needed and the administration’s new guidelines for AI regulation do a reasonably good job striking that balance. Continue reading →

Technopanics, Progress Studies, AI, spectrum, and privacy were hot topics at the Technology Liberation Front in the past year. Below are the most popular posts from 2019.

Glancing at our site metrics over the past 10 years, the biggest topics in the 2010s were technopanics, Bitcoin, net neutrality, the sharing economy, and broadband policy. Looking forward at the 2020s, I’ll hazard some predictions about what will be significant debates at the TLF: technopanics and antitrust, AVs, drones, and the future of work. I suspect that technology and federalism will be long-running issues in the next decade, particularly for drones, privacy, AVs, antitrust, and healthcare tech.

Enjoy 2019’s top 10, and Happy New Year.

10. 50 Years of Video Games & Moral Panics by Adam Thierer

I have a confession: I’m 50 years old and still completely in love with video games.

As a child of the 1970s, I straddled the divide between the old and new worlds of gaming. I was (and remain) obsessed with board and card games, which my family played avidly. But then Atari’s home version of “Pong” landed in 1976. The console had rudimentary graphics and controls, and just one game to play, but it was a revelation. After my uncle bought Pong for my cousins, our families and neighbors would gather round his tiny 20-inch television to watch two electronic paddles and a little dot move around the screen.

9. The Limits of AI in Predicting Human Action by Anne Hobson and Walter Stover

Let’s assume for a second that AIs could possess not only all relevant information about an individual, but also that individual’s knowledge. Even if companies somehow could gather this knowledge, it would only be a snapshot at a moment in time. Countless converging factors can affect your next decision not to purchase a soda, even if your past purchase history suggests you will. Maybe you went to the store that day with a stomach ache. Maybe your doctor just warned you about the perils of high fructose corn syrup, so you forgo your purchase. Maybe an AI-driven price increase causes you to react by finding an alternative seller.

In other words, when you interact with the market—for instance, going to the store to buy groceries—you are participating in a discovery process about your own preferences or willingness to pay.

8. Free-market spectrum policy and the C Band by Brent Skorup

A few years ago I would have definitely favored speed and the secondary market plan. I still lean towards that approach, but I’m a little more on the fence after reading Richard Epstein’s work and others’ about the “public trust doctrine.” This is a traditional governance principle that requires public actors to receive fair value when disposing of public property. It prevents public institutions from giving discounted public property to friends and cronies. Clearly, cronyism isn’t the case here, and the FCC can’t undo what past FCCs did generations ago in giving away spectrum. I think the need for speedy deployment trumps the windfall issue here, but it’s a closer call for me than in the past.

One proposal that hasn’t been contemplated with the C Band but might have merit is an overlay auction with a deadline. With such an auction, the FCC gives incumbent users a deadline to vacate a band (say, 5 years). The FCC then auctions flexible-use licenses in the band. The FCC receives the auction revenues and the winning bidders are allowed to deploy services immediately in the “white spaces” unoccupied by the incumbents. The winning bidders are allowed to pay the incumbents to move out before the deadline.

7. STELAR Expiration Warranted by Hance Haney

The retransmission fees were purposely set low to help the emerging satellite carriers get established in the marketplace when innovation in satellite technology still had a long way to go. Today the carriers are thriving business enterprises, and there is no need for them to continue receiving subsidies. Broadcasters, on the other hand, face unprecedented competition for advertising revenue that historically covered the entire cost of content production.

Today a broadcaster receives 28 cents per subscriber per month when a satellite carrier retransmits their local television signal. But the fair market value of that signal is actually $2.50, according to one estimate.

6. What is Progress Studies? by Adam Thierer

How do we shift cultural and political attitudes about innovation and progress in a more positive direction? Collison and Cowen explicitly state that the goal of Progress Studies transcends “mere comprehension” in that it should also look to “identify effective progress-increasing interventions and the extent to which they are adopted by universities, funding agencies, philanthropists, entrepreneurs, policy makers, and other institutions.”

But fostering social and political attitudes conducive to innovation is really more art than science. Specifically, it is the art of persuasion. Science can help us amass the facts proving the importance of innovation and progress to human improvement. Communicating those facts and ensuring that they infuse culture, institutions, and public policy is more challenging.

5. How Do You Value Data? A Reply To Jaron Lanier’s Op-Ed In The NYT by Will Rinehart

All of this is to say that there is no single way to estimate the value of data.

As for the Lanier piece, here are some other things to consider:

A market for data already exists. It just doesn’t include one set of participants that Jaron wants included: platform users.

Will users want to be data entrepreneurs, looking for the best value for their data? Probably not. At best, they will hire an intermediary to do this, which is basically the job of the platforms already.

An underlying assumption is that the value of data is greater than the value advertisers are willing to pay for a slice of your attention. I’m not sure I agree with that.

Finally, how exactly do you write these kinds of laws?

4. Explaining the California Privacy Rights and Enforcement Act of 2020 by Ian Adams

As released, the initiative is equal parts privacy extremism and cynical politics. Substantively, some will find elements to applaud in the CPREA, between prohibitions on the use of behavioral advertising and reputational risk assessment (all of which are deserving of their own critiques), but the operational structure of the CPREA is nothing short of disastrous. Here are some of the worst bits:

3. Best Practices for Public Policy Analysts by Adam Thierer

So, for whatever it’s worth, here are a few ideas about how to improve your content and your own brand as a public policy analyst. The first list is just some general tips I’ve learned from others after 25 years in the world of public policy. Following that, I have also included a separate set of notes I use for presentations focused specifically on how to prepare effective editorials and legislative testimony. There are many common recommendations on both lists, but I thought I would just post them both here together.

2. An Epic Moral Panic Over Social Media by Adam Thierer

Strangely, many elites, politicians, and parents forget that they, too, were once kids and that their generation was probably also considered hopelessly lost in the “vast wasteland” of whatever the popular technology or content of the day was. The Pessimists Archive podcast has documented dozens of examples of this recurring phenomenon. Each generation makes it through the panic du jour, only to turn around and start lambasting newer media or technologies that they worry might be rotting their kids to the core. While these panics come and go, the real danger is that they sometimes result in concrete policy actions that censor content or eliminate choices that the public enjoys. Such regulatory actions can also discourage the emergence of new choices.

1. How Conservatives Came to Favor the Fairness Doctrine & Net Neutrality by Adam Thierer

If I divided my time in Tech Policy Land into two big chunks of time, I’d say the biggest tech-related policy issue for conservatives during the first 15 years I was in the business (roughly 1990 – 2005) was preventing the resurrection of the so-called Fairness Doctrine. And the biggest issue during the second 15-year period (roughly 2005 – present) was stopping the imposition of “Net neutrality” mandates on the Internet. In both cases, conservatives vociferously blasted the notion that unelected government bureaucrats should sit in judgment of what constituted “fairness” in media or “neutrality” online.

Many conservatives are suddenly changing their tune, however.

[Cross-posted to Medium.]

The spread of “sanctuary cities”—local governments that resist federal laws or regulations in some fashion, and typically for strongly-held moral reasons—is one of the most interesting and controversial governance developments of recent decades. Unfortunately, the concept receives only a selective defense from people when it fits their narrow political objectives, such as sanctuary movements for immigration and gun rights.

But there is a broader case to be made for sanctuaries in many different contexts as a way to encourage experiments in alternative governance models and to let people live lives of their choosing. The concept faces many challenges in practice, however, and I remain skeptical that sanctuary cities will ever scale up and become a widespread governance phenomenon. There’s just too much for federal officials to lose, and they will likely crush any particular sanctuary movement that gains serious steam.

Sanctuary Cities as Political Civil Disobedience

First, let’s think about what local officials are really doing when they declare themselves a sanctuary. (Because they can be formed by city, county, or state governments, I will just use “sanctuaries” as a shorthand throughout this essay.)

Academics use the term “rule departure” when referencing “deliberate failures, often for conscientious reasons, to discharge the duties of one’s office.” [Joel Feinberg, “Civil Disobedience in the Modern World,” in Humanities in Society, Vol. 2, No. 1, 1979, p 37.] In this sense, sanctuary cities could be viewed as a type of collective civil disobedience by public officials because these governance arrangements are typically defended on moral grounds and represent an active form of resistance to policies imposed by higher-ups. Continue reading →

After coming across some reviews of Thomas Philippon’s book, The Great Reversal: How America Gave Up on Free Markets, I decided to get my hands on a copy. Most of the reviews and coverage mention the increasing monopoly power of US telecom companies and rising prices relative to European companies. In fact, Philippon tells readers in the intro of the book that the question that spurred him to write Great Reversal is “Why on earth are US cell phone plans so expensive?”

As someone who follows the US mobile market closely, I was a little disappointed that the analysis of the telecom sectors is rather slim. There are only a handful of pages (out of 340) comparing European and US telecom, featuring one story about French intervention and one chart. This isn’t a criticism of the book–Philippon doesn’t pitch it as a telecom policy book. However, the European telecom story isn’t the clear policy success it’s described as in the book.

The general narrative in the book is that US lawmakers are entranced by the laissez-faire Chicago school of antitrust and placated by dark money campaigns. The result, as Philippon puts it, is that “Creeping monopoly power has slowly but surely suffocated the [US] middle class” and today Europe has freer markets than the US. That may be, but the telecom sectors don’t provide much support for that idea.

Low Prices in European Telecom . . .

Philippon says that “The telecommunications industry provides another example of successful competition policy in Europe.”

He continues:

The case of France provides a striking example of competition. Free Mobile . . . obtained its 4G license [with regulator assistance] in 2011 and became a significant competitor for the three large incumbents. The impact was immediate. . . . In about six months after the entry of Free Mobile, the price paid by French consumers had dropped by about 40 percent. Wireless services in France had been more expensive in the US, but now they are much cheaper.

It’s true, mobile prices are generally lower in Europe. Monthly average revenue per user (ARPU) in the US, for instance, is about double the ARPU in the UK (~$42 v. ~$20 in 2016). And, as Philippon points out, cellular prices are lower in France as well.

One issue with this competition “success story”: the US also has four mobile carriers, and had four even prior to 2011. Since the number of competitors is the same in France and the US, competition alone doesn’t really explain the price difference. (India, for instance, has fewer providers than the US and France–and much lower cellular prices–so the number of competitors isn’t a great predictor of pricing.)

. . . and Low Investment

If “lower telecom prices than the US” is the standard, then yes, European competition policy has succeeded. But if consumers and regulators prioritize other things, like industry investment, network quality (fast speeds), and rural coverage, the story is much more mixed. (Bret Swanson at AEI points to other issues with Philippon’s analysis.) Philippon’s singular focus on telecom prices and number of competitors distracts from these other important competition and policy dimensions.

According to OECD data, for instance, in 2015 the US exceeded the OECD average for spending on IT and communications equipment as a percent of GDP. France might have lower cell phone bills, but US telecom companies spend nearly three times as much as French telecom companies on this measure (1.1% of GDP v. 0.4% of GDP).

Further, telecom investment per capita in the US was much higher than in its European counterparts. US telecom companies spent about 55 percent more per capita than French telecoms ($272 v. $175), according to the same OECD reports. And France is one of the better European performers. Many European carriers spend, on a per capita basis, less than half what US carriers spend. US carriers spend 130% more than UK telecoms and 145% more than German telecoms.
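For readers who want to check the comparisons, the “percent more” figures are simple ratios of the OECD numbers cited above:

```python
def percent_more(a: float, b: float) -> float:
    """How much more, in percent, a is than b."""
    return (a / b - 1) * 100

print(round(percent_more(272, 175)))  # 55  -> US vs. France, per capita
print(round(percent_more(1.1, 0.4)))  # 175 -> US vs. France, share of GDP (~2.75x)
```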

This investment deficit in Europe has real-world effects on consumers. OpenSignal uses crowdsourced data and software to determine how frequently users’ phones have a 4G LTE network available (a proxy for coverage and network quality) around the world. The US ranked fourth in the world (86%) in 2017, beating out every European country save Norway. In contrast, France and Germany ranked 60th and 61st, respectively, on this network quality measure, beaten out by less wealthy nations like Kazakhstan, Cambodia, and Romania.

The European telecom regulations and anti-merger policies created a fragmented market and financially strapped companies. As a result, investors are fleeing European telecom firms. According to the Financial Times and Bloomberg data, between 2012 and 2018, the value of Europe’s telecom companies fell almost 50%. The value of the US sector rose by 70% and the Asian sector rose by 13% in that time period.  

Price Wars or 5G Investment?

Philippon is right that Europe has chosen a different path than the US when it comes to telecom services. Whether they’ve chosen a pro-consumer path depends on where you sit (and live). Understandably, academics and advocates living in places like Boston, New York and DC look fondly at Berlin and Paris broadband prices. Network quality outside of the cities and suburbs rarely enters the picture in these policy discussions, and Philippon’s book is no exception. US lawmakers and telecom companies have prioritized non-price dimensions: network quality, investment in 5G, and rural coverage.

If anything, European regulators seem to be retreating somewhat from the current path of creating competitors and regulating prices. As the Financial Times wrote last year, the trend in European telecom is consolidation. The French regulator ARCEP reversed course last year and signaled a new openness to telecom consolidation.

Still, there are significant obstacles to consolidation in European markets, and it seems likely they’ll fall further behind the US and China in rural network coverage and 5G investment. European telecom companies are in a bit of a panic about this, which they expressed in a letter to the European Commission this month urging reform.

In short, European telecom competition policy is not the unqualified success depicted in Great Reversal. To his credit, Philippon in the book intro emphasizes humility about prognostications and the limits of experts’ knowledge:

I readily admit I don’t have all the answers. …I would suggest . . . that [economists’] prescriptions be taken with a (large) grain of salt. When you read an author or commentator who tells you something obvious, take your time and do the math. Almost every time, you’ll discover that it wasn’t really obvious at all. I have found that people who tell you that the answers to the big questions in economics are obvious are telling you only half of the story.

Couldn’t have put it better myself.

Credit to Connor Haaland for research assistance.