By Brent Skorup and Michael Kotrous

In 1999, the FCC completed one of its last spectrum “beauty contests.” A sizable segment of spectrum was set aside, free of charge, for the US Department of Transportation (DOT) and DOT-selected device companies to develop Dedicated Short-Range Communications (DSRC), a standard for wireless automotive communications like vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) links. The government’s grand plans for DSRC never materialized, and in the intervening 20 years new technologies, like lidar, radar, and cellular systems, have advanced to the point that they now do most of what regulators planned for DSRC.

Too often, however, government technology plans linger, kept alive by interest groups that rely on the regulatory privilege even after the market has moved on. At the eleventh hour of the Obama administration, NHTSA proposed mandating DSRC devices in all new vehicles, an unprecedented move that Brent and other free-market advocates opposed in public interest comment filings. As Brent wrote last year,

In the fast-moving connected car marketplace, there is no reason to force products with reliability problems [like DSRC] on consumers. Any government-designed technology that is “so good it must be mandated” warrants extreme skepticism….

Further,

Rather than compel automakers to add costly DSRC systems to cars, NHTSA should consider a certification or emblem system for vehicle-to-vehicle safety technologies, similar to its five-star crash safety ratings. Light-touch regulatory treatment would empower consumer choice and allow time for connected car innovations to develop.

Fortunately, the Trump administration put the brakes on the mandate, which would have added cost and complexity to cars for uncertain and unlikely benefits.

However, some regulators and companies are trying to revive the DSRC device industry while NHTSA’s proposed DSRC mandate is on life support. Marc Scribner at CEI uncovered a sneaky attempt to generate DSRC technology sales via an EPA proceeding. The stalking horse DSRC boosters have chosen is the Corporate Average Fuel Economy (CAFE) regulations, specifically the off-cycle program that EPA and NHTSA jointly administer. That program rewards manufacturers that adopt new technologies that reduce a vehicle’s emissions in ways not captured by conventional measures like highway fuel economy.

Under the proposed rules, automakers that install V2V or V2I capabilities can receive credit for having reduced emissions. The EPA proposal doesn’t say “DSRC,” but it singles out only one technology standard for favored treatment in this scheme: a standard underlying DSRC.

This proposal comes as a bit of a surprise to those who have followed auto technology; we’re aware of no studies showing that DSRC reduces emissions. (DSRC’s primary use case today is collision warnings to the driver.) But the EPA proposes a convenient end-run around that problem: simply waiving the requirement that manufacturers provide data showing a reduction in harmful emissions. Instead of requiring emissions data, the EPA proposes a much lower bar: automakers need only show that these devices “have some connection to overall environmental benefits.” Unless the agency applies credits in a tech-neutral way and requires more rigor in the final rules, which is highly unlikely, this looks like a backdoor subsidy to DSRC achieved by gaming emissions-reduction regulations.

Hopefully, EPA regulators will see through the ruse and drop the proposal. It was a pleasant surprise last week when a DOT spokesman said the agency favors a tech-neutral approach for this “talking car” band. But after 20 years, the 75 MHz of spectrum gifted to DSRC device makers should be repurposed by the FCC for flexible use. Fortunately, the FCC has started thinking about alternative uses for the DSRC spectrum. In 2015, Commissioners O’Rielly and Rosenworcel said the agency should consider flexible-use alternatives to this DSRC-only band.

The FCC would be wise to follow through and push even further. Until the gifted spectrum that powers DSRC is reallocated to flexible use, interest groups will continue to pull every regulatory lever they have to subsidize or mandate adoption of talking-car technology. If DSRC is the best V2V technology available, device makers should win market share by convincing auto companies, not by convincing regulators.

Last month, it was my great honor to be a keynote speaker at Lincoln Network’s Reboot 2018 “Innovation Under Threat” conference. Zach Graves interviewed me for 30 minutes about a wide range of topics, including innovation arbitrage, evasive entrepreneurialism, technopanics, the pacing problem, permissionless innovation, technological civil disobedience, existential risk, soft law, and more. They’ve now posted the full event video, which you can watch below.

National Public Radio, the Robert Wood Johnson Foundation, and the Harvard T.H. Chan School of Public Health just published a new report on “Life in Rural America.” This survey of 1,300 adults living in the rural United States has a lot to say about health issues, population change, the strengths of and challenges facing rural communities, as well as discrimination and drug use. But I wanted to highlight two questions related to rural broadband development that might make you update your beliefs about massive rural investment. Continue reading →

Many are understandably pessimistic about platforms and technology. This year has been a tough one, from Cambridge Analytica and Russian trolls to the implementation of GDPR and data breaches galore.

Those who think about the world, about the problems we see every day, and about their own place in it will quickly realize the immense frailty of humankind. Fear and worry make sense. We are flawed, each one of us. And technology only seems to exacerbate those problems.

But life is getting better. Poverty continues to nose-dive; adult literacy is at an all-time high; and people around the world are living longer, living in democracies, and better educated than at any other time in history. Meanwhile, the digital revolution has produced informational abundance, helping to correct the informational asymmetries that have long plagued humankind. The problem we now face is not how to address informational constraints but how to give people the means to sort through and make sense of this abundant trove of data. These macro trends don’t make headlines. Psychologists know that people love to read negative articles; our brains are wired for pessimism. Continue reading →

Last week, I had the honor of being a panelist at the Information Technology and Innovation Foundation’s event on the future of privacy regulation. The debate question was simple enough: Should the US copy the EU’s new privacy law?

When we started planning the event, the California Consumer Privacy Act (CCPA) wasn’t a done deal. But now that it has passed, with a 2020 implementation deadline, the terms of the privacy conversation have changed. Next year, 2019, Congress will have the opportunity to pass a law that could supersede the CCPA, and some are looking to the EU’s General Data Protection Regulation (GDPR) for guidance. Here are some reasons not to take that path. Continue reading →

In recent months, my colleagues and I at the Mercatus Center at George Mason University have published a flurry of essays about the importance of innovation, entrepreneurialism, and “moonshots,” as well as the future of technological governance more generally. A flood of additional material is coming, but I figured I’d pause for a moment to track our progress so far. Much of this work is leading up to my next book on the freedom to innovate, which I am currently finishing.

Continue reading →

Over at the Mercatus Center’s Bridge blog, Trace Mitchell and I just posted an essay entitled “A Non-Partisan Way to Help Workers and Consumers,” which discusses the Federal Trade Commission’s (FTC) new Economic Liberty Task Force report on occupational licensing.

We applaud the FTC’s calls for greater occupational licensing uniformity and portability, but we regret the missed opportunity to address the root problem of excessive licensing more generally. Policymakers need to confront the sheer absurdity of licensing so many jobs that pose zero risk to public health and safety. Licensing has become completely detached from risk realities and actual public needs.

As the FTC notes, excessive licensing limits employment opportunities, worker mobility, and competition while also “resulting in higher prices, reduced quality, and less convenience for consumers.” These are unambiguous facts that are widely accepted by experts of all stripes. Both the Obama and Trump administrations, for example, have agreed on the need for comprehensive licensing reforms. Continue reading →

I’ve always been perplexed by tech critiques that seek to pit “humanist” values against technology or technological processes, or that even suggest a bright demarcation exists between the two. Properly understood, “technology” and technological innovation are simply extensions of our humanity and represent efforts to continuously improve the human condition. In that sense, humanism and technology are complements, not opposites.

I started thinking about this again after reading a recent article by Christopher Mims of The Wall Street Journal, which introduced me to the term “techno-chauvinism,” a new label some social critics use to describe technologies or innovators that supposedly fail to behave in a “humanist” fashion. Mims attributes the term to Meredith Broussard of New York University, who defines it as “the idea that technology is always the highest and best solution, and is superior to the people-based solution.” [Italics added.] Later, on Twitter, Mims defined and critiqued techno-chauvinism as “the belief that the best solution to any problem is technology, not changing our culture, habits or mindset.”

Everything Old is New Again

There are other terms critics have used to describe the same notion, including: “techno-fundamentalism” (Siva Vaidhyanathan), “cyber-utopianism,” and “technological solutionism” (Evgeny Morozov). In a sense, all these terms are really just variants of what scholars in the field of Science and Technology Studies (STS) have long referred to as “technological determinism.”

As I noted in a recent essay about determinism, the traditional “hard” variant of technological determinism refers to the notion that technology almost has a mind of its own and that it will plow forward without much resistance from society or governments. Critics argue that determinist thinking denies or ignores the importance of the human element in moving history forward, or what Broussard would refer to as “people-based solutions.”

The first problem with this thinking is that there are no bright lines in these debates and that many “softer” variants of determinism exist. The same problem arises when we turn to discussions about “humanism” and “technology.” Things get definitionally murky quite quickly, and everyone seemingly has a preferred conception of these terms to fit their own ideological dispositions. “Humanism is a rather vague and contested term with a convoluted history,” observes tech philosopher Michael Sacasas. And here’s an essay that I have updated many times over the years to catalog the dozens of different definitions of “technology” I have unearthed in my ongoing research. Continue reading →

Over at the Mercatus Center’s Bridge blog, Chad Reese interviewed me about my forthcoming book and continuing research on “evasive entrepreneurialism” and the freedom to innovate. I provide a quick summary of the issues and concepts that my colleagues and I are currently exploring. Those issues include:

  • free innovation
  • evasive entrepreneurialism & social entrepreneurialism
  • technological civil disobedience
  • the freedom to tinker / freedom to try / freedom to innovate
  • the right to earn a living
  • “moonshots” / deep technologies / disruptive innovation / transformative tech
  • innovation culture
  • global innovation arbitrage
  • the pacing problem & the Collingridge dilemma
  • “soft law” solutions for technological governance

You can read the entire Q&A over at The Bridge, or find it pasted down below.

Continue reading →

Reading Professor Siva Vaidhyanathan’s recent op-ed in the New York Times, one could reasonably assume that Facebook is now seriously tackling the enormous problem of dangerous information. In detailing his takeaways from a recent hearing featuring Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey, Vaidhyanathan explained,

Ms. Sandberg wants us to see this as success. A number so large must mean Facebook is doing something right. Facebook’s machines are determining patterns of origin and content among these pages and quickly quashing them.

Still, we judge exterminators not by the number of roaches they kill, but by the number that survive. If 3 percent of 2.2 billion active users are fake at any time, that’s still 66 million sources of potentially false or dangerous information.

One thing is clear about this arms race: It is an absurd battle of machine against machine. One set of machines create the fake accounts. Another deletes them. This happens millions of times every month. No group of human beings has the time to create millions, let alone billions, of accounts on Facebook by hand. People have been running computer scripts to automate the registration process. That means Facebook’s machines detect the fakes rather easily. (Facebook says that fewer than 1.5 percent of the fakes were identified by users.)
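The arithmetic behind that estimate is easy to verify. Here is a minimal back-of-envelope sketch in Python, using only the figures cited in the quoted passage (roughly 2.2 billion active users and a 3 percent fake-account rate); both numbers come from Vaidhyanathan’s op-ed, not from any independent analysis:

```python
# Back-of-envelope check of the fake-account estimate quoted above.
# Inputs are the figures cited in the passage, not independent data:
# ~2.2 billion active users, ~3 percent of which may be fake at any time.

monthly_active_users = 2_200_000_000
fake_share = 0.03

fake_accounts = monthly_active_users * fake_share
print(f"Estimated fake accounts: {fake_accounts:,.0f}")  # ~66,000,000
```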

But it could be that, in its zeal to tamp down criticism from all sides, Facebook has overcorrected and is now over-moderating. The fundamental problem is that it is nearly impossible to know the true amount of disinformation on a platform. For one, there is little agreement on what kind of content needs to be policed. It is doubtful that everyone would agree on what constitutes fake news, what separates it from disinformation or propaganda, and how all of that differs from hate speech. But more fundamentally, even if everyone agreed on what should be taken down, it is still not clear that algorithmic filtering methods could perfectly approximate that. Continue reading →