Recently, Noah Smith explored an emerging question in tech: is there a “kill zone” in which new and innovative upstarts are being throttled by the biggest players? He explains,

Facebook commissioned a study by consultant Oliver Wyman that concluded that venture investment in the technology sector wasn’t lower than in other sectors, which led Wyman to conclude that there was no kill zone.

But economist Ian Hathaway noted that looking at the overall technology industry was too broad. Examining three specific industry categories — internet retail, internet software and social/platform software, corresponding to the industries dominated by Amazon, Google and Facebook, respectively — Hathaway found that initial venture-capital financings have declined by much more in the past few years than in comparable industries. That suggests the kill zone is real.

A recent paper by economists Wen Wen and Feng Zhu reaches a similar conclusion. Observing that Google has tended to follow Apple in deciding which mobile-app markets to enter, they assessed whether the threat of potential entry by Google (as measured by Apple’s actions) deters innovation by startups making apps for Google’s Android platform. They conclude that when the threat of the platform owner’s entry is higher, fewer app makers will be interested in offering a product for that particular niche. A 2014 paper by the same authors found similar results for Amazon and third-party merchants using its platform.

So, are American tech companies making it difficult for startups? Perhaps, but there are reasons to be skeptical. Continue reading →

To read Cathy O’Neil’s Weapons of Math Destruction (2016) is to encounter another in a line of progressive pugilists of the technological age. Where Tim Wu took on the future of the Internet and Evgeny Morozov chided online slacktivism, O’Neil takes on algorithms, or what she has dubbed weapons of math destruction (WMDs).

O’Neil’s book came at just the right moment in 2016. It sounded the alarm about big data just as it was becoming a topic for public discussion. And now, two years later, her worries seem prescient. As she explains in the introduction,

Big Data has plenty of evangelists, but I’m not one of them. This book will focus sharply in the other direction, on the damage inflicted by WMDs and the injustice they perpetuate. We will explore harmful examples that affect people at critical life moments: going to college, borrowing money, getting sentenced to prison, or finding and holding a job. All of these life domains are increasingly controlled by secret models wielding arbitrary punishments.

O’Neil is explicit about laying the blame at the feet of the WMDs themselves: “You cannot appeal to a WMD. That’s part of their fearsome power. They do not listen.” Yet these models aren’t deployed and adopted in a frictionless environment. Instead, they “reflect goals and ideology,” as O’Neil readily admits. Where Weapons of Math Destruction falters is in ascribing, in places, too much agency to algorithms, and in doing so it misses the broader politics behind algorithmic decision making. Continue reading →

By Brent Skorup and Trace Mitchell

An important benefit of 5G cellular technology is more bandwidth and more reliable wireless services. This means carriers can offer more niche services, like smart glasses for the blind and remote assistance for autonomous vehicles. A Vox article last week explored an issue familiar to technology experts: will millions of new 5G transmitters and devices increase cancer risk? It’s an important question but, in short, we’re not losing sleep over it.

5G differs from previous generations of cellular technology in its reliance on “densification”: placing many smaller transmitters throughout neighborhoods. This densification process means that cities must regularly approve operators’ plans to upgrade infrastructure and install devices on public rights-of-way. However, some homeowners and activists are resisting 5G deployment because they fear more transmitters will lead to more radiation and cancer. (Under federal law, the FCC sets safety requirements for emitters like cell towers and 5G devices. State and local regulators therefore are not allowed to make permitting decisions based on what they or their constituents believe are the effects of wireless emissions.)

We aren’t public health experts; however, we are technology researchers and decided to explore the telecom data to see if there is a relationship. If radio transmissions increase cancer, we should expect to see a correlation between the number of cellular transmitters and cancer rates. Presumably there is a cumulative effect: the more cellular radiation people are exposed to, the higher the cancer rates.
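To make the shape of that check concrete, here is a minimal sketch in Python. The yearly figures are made-up placeholders standing in for the real series (for example, annual transmitter counts and age-adjusted nervous-system cancer incidence per 100,000), so the printed correlation is purely illustrative.

```python
# Minimal sketch of the correlation check described above.
# The numbers are illustrative placeholders, NOT real data; swap in actual
# yearly transmitter counts and cancer-incidence rates before drawing conclusions.
import numpy as np

years = np.arange(2000, 2016)                     # 16 years of observations
transmitters = np.linspace(100_000, 400_000, 16)  # placeholder: ~300% growth
rng = np.random.default_rng(0)
cancer_rate = 6.4 + rng.normal(0, 0.05, 16)       # placeholder: essentially flat

# Pearson correlation between transmitter counts and cancer incidence
r = np.corrcoef(transmitters, cancer_rate)[0, 1]
print(f"Pearson r between transmitter count and cancer rate: {r:.2f}")
```

If exposure were driving cancer rates, the two series should rise together and r would be strongly positive.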

From what we can tell, there is no link between cellular systems and cancer. Despite a huge increase in the number of transmitters in the US since 2000, the nervous-system cancer rate hasn’t budged. The number of wireless transmitters in the US has increased massively–300%–in 15 years. (This is on the conservative side–there are tens of millions of WiFi devices that are also transmitting but are not counted here.) Continue reading →

By Brent Skorup and Michael Kotrous

In 1999, the FCC completed one of its last spectrum “beauty contests.” A sizable segment of spectrum was set aside, for free, for the US Department of Transportation (DOT) and DOT-selected device companies to develop DSRC, a standard for wireless automotive communications like vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) messaging. The government’s grand plans for DSRC never materialized, and in the intervening 20 years new tech—like lidar, radar, and cellular systems—advanced and now does most of what regulators planned for DSRC.

Too often, however, government technology plans linger, kept alive by interest groups that rely on the new regulatory privilege, even when the market moves on. At the eleventh hour of the Obama administration, NHTSA proposed mandating DSRC devices in all new vehicles, an unprecedented move that Brent and other free-market groups opposed in public interest comment filings. As Brent wrote last year,

In the fast-moving connected car marketplace, there is no reason to force products with reliability problems [like DSRC] on consumers. Any government-designed technology that is “so good it must be mandated” warrants extreme skepticism….

Further,

Rather than compel automakers to add costly DSRC systems to cars, NHTSA should consider a certification or emblem system for vehicle-to-vehicle safety technologies, similar to its five-star crash safety ratings. Light-touch regulatory treatment would empower consumer choice and allow time for connected car innovations to develop.

Fortunately, the Trump administration put the brakes on the mandate, which would have added cost and complexity to cars for uncertain and unlikely benefits.

However, some regulators and companies are trying to revive the DSRC device industry while NHTSA’s proposed DSRC mandate is on life support. Marc Scribner at CEI uncovered a sneaky attempt to drum up DSRC technology sales via an EPA proceeding. The stalking horse DSRC boosters have chosen is the Corporate Average Fuel Economy (CAFE) regulations—specifically the EPA’s off-cycle program, which EPA and NHTSA jointly manage. That program rewards manufacturers that adopt new technologies that reduce a vehicle’s emissions in ways not captured by conventional measures like highway fuel economy.

Under the proposed rules, automakers that install V2V or V2I capabilities can receive credit for having reduced emissions. The EPA proposal doesn’t say “DSRC,” but it singles out only one technology standard to be favored in this scheme: a standard underlying DSRC.

This proposal comes as a bit of a surprise to those who have followed auto technology; we’re aware of no studies showing DSRC improves emissions. (DSRC’s primary use case today is collision warnings to the driver.) But the EPA proposes a helpful end-run around that problem: simply waiving the requirement that manufacturers provide data showing a reduction in harmful emissions. Instead of requiring emissions data, the EPA proposes a much lower bar: automakers need only show that these devices “have some connection to overall environmental benefits.” Unless the agency applies credits in a tech-neutral way and requires more rigor in the final rules (which is highly unlikely), this looks like a backdoor subsidy to DSRC via gaming of emissions-reduction regulations.

Hopefully EPA regulators will recognize the ruse and drop the proposal. It was a pleasant surprise last week when a DOT spokesman committed the agency to a tech-neutral approach for this “talking car” band. But after 20 years, this 75 MHz of spectrum gifted to DSRC device makers should be repurposed by the FCC for flexible use. Fortunately, the FCC has started thinking about alternative uses for the DSRC spectrum. In 2015, Commissioners O’Rielly and Rosenworcel said the agency should consider flexible-use alternatives to this DSRC-only band.

The FCC would be wise to follow through and push even further. Until the gifted spectrum that powers DSRC is reallocated to flexible use, interest groups will continue to pull every regulatory lever they have to subsidize or mandate adoption of talking-car technology. If DSRC is the best V2V technology available, device makers should win market share by convincing auto companies, not regulators.

Last month, it was my great honor to be invited as a keynote speaker at Lincoln Network’s Reboot 2018 “Innovation Under Threat” conference. Zach Graves interviewed me for 30 minutes about a wide range of topics, including innovation arbitrage, evasive entrepreneurialism, technopanics, the pacing problem, permissionless innovation, technological civil disobedience, existential risk, soft law, and more. They’ve now posted the full event video, and you can watch it below.

National Public Radio, the Robert Wood Johnson Foundation, and the Harvard T.H. Chan School of Public Health just published a new report on “Life in Rural America.” This survey of 1,300 adults living in the rural United States has a lot to say about health issues, population change, the strengths of and challenges facing rural communities, as well as discrimination and drug use. But I wanted to highlight two questions related to rural broadband development that might make you update your beliefs about massive rural investment. Continue reading →

Many are understandably pessimistic about platforms and technology. This year has been a tough one, from Cambridge Analytica and Russian trolls to the implementation of GDPR and data breaches galore.

Those who think about the world, about the problems we see every day, and about their own place in it will quickly realize the immense frailty of humankind. Fear and worry make sense. We are flawed, each one of us. And technology only seems to exacerbate those problems.

But life is getting better. Poverty continues to nose-dive; adult literacy is at an all-time high; and people around the world are living longer, are more likely to live in democracies, and are better educated than at any other time in history. Meanwhile, the digital revolution has produced informational abundance, helping to correct the informational asymmetries that have long plagued humankind. The problem we now face is not how to overcome informational constraints, but how to give people the means to sort through and make sense of this abundant trove of data. Yet these macro trends don’t make headlines. Psychologists know that people love to read negative articles; our brains are wired for pessimism. Continue reading →

Last week, I had the honor of being a panelist at the Information Technology and Innovation Foundation’s event on the future of privacy regulation. The debate question was simple enough: Should the US copy the EU’s new privacy law?

When we started planning the event, California’s Consumer Privacy Act (CCPA) wasn’t a done deal. But now that it has passed, with a 2020 deadline for implementation, the terms of the privacy conversation have changed. Next year, 2019, Congress will have the opportunity to pass a law that could supersede the CCPA, and some are looking to the EU’s General Data Protection Regulation (GDPR) for guidance. Here are some reasons not to take that path. Continue reading →

In recent months, my colleagues and I at the Mercatus Center at George Mason University have published a flurry of essays about the importance of innovation, entrepreneurialism, and “moonshots,” as well as the future of technological governance more generally. A flood of additional material is coming, but I figured I’d pause for a moment to track our progress so far. Much of this work is leading up to my next book on the freedom to innovate, which I am currently finishing.

Continue reading →

Over at the Mercatus Center’s Bridge blog, Trace Mitchell and I just posted an essay entitled “A Non-Partisan Way to Help Workers and Consumers,” which discusses the Federal Trade Commission’s (FTC) new Economic Liberty Task Force report on occupational licensing.

We applaud the FTC’s calls for greater occupational licensing uniformity and portability, but we regret the missed opportunity to address the root problem of excessive licensing more generally. While the FTC is right to push for uniformity and portability, policymakers need to confront the sheer absurdity of licensing so many jobs that pose zero risk to public health and safety. Licensing has become completely detached from risk realities and actual public needs.

As the FTC notes, excessive licensing limits employment opportunities, worker mobility, and competition while also “resulting in higher prices, reduced quality, and less convenience for consumers.” These are unambiguous facts that are widely accepted by experts of all stripes. The Obama and Trump administrations, for example, have both agreed on the need for comprehensive licensing reforms. Continue reading →