November 2018

By Adam Thierer & Jennifer Huddleston Skees

“He’s making a list and checking it twice. Gonna find out who’s naughty and nice.”

With the Christmas season approaching, apparently it’s not just Santa who is making a list. The Trump Administration has just asked whether a long list of emerging technologies is naughty or nice — as in, whether those technologies should be heavily regulated or allowed to be developed and traded freely.

If they land on the naughty list, these technologies could be subjected to complex export control regulations, which would limit research and development efforts in many emerging tech fields and inadvertently undermine U.S. innovation and competitiveness. Worse yet, it isn’t even clear there would be any national security benefit associated with such restrictions.  

From Light-Touch to a Long List

Generally speaking, the Trump Administration has adopted a “light-touch” approach to the regulation of emerging technology and relied on more flexible “soft law” approaches to high-tech policy matters. That’s what makes the move to impose restrictions on the trade and usage of these emerging technologies somewhat counter-intuitive. On November 19, the Department of Commerce’s Bureau of Industry and Security launched a “Review of Controls for Certain Emerging Technologies.” The notice seeks public comment on “criteria for identifying emerging technologies that are essential to U.S. national security, for example because they have potential conventional weapons, intelligence collection, weapons of mass destruction, or terrorist applications or could provide the United States with a qualitative military or intelligence advantage.” Continue reading →

Contemporary tech criticism displays a kind of anti-nostalgia. Instead of reverence for the past, there is anxiety about the future. In these visions, the future is imagined as a strange, foreign land, beset with problems. And yet, to quote that old adage, tomorrow is the visitor that is always coming but never arrives. The future never arrives because we are assembling it today.

The distance between the now and the future finds its hook in tech policy in the pacing problem, a term describing the mismatch between advancing technologies and society’s efforts to cope with them. Vivek Wadhwa explained, “We haven’t come to grips with what is ethical, let alone with what the laws should be, in relation to technologies such as social media.” In The Laws of Disruption, Larry Downes explained the pacing problem like this: “technology changes exponentially, but social, economic, and legal systems change incrementally.” Or, as Adam Thierer wondered, “What happens when technological innovation outpaces the ability of laws and regulations to keep up?”

Here are three short responses. Continue reading →

Last week, science writer Michael Shermer tweeted out this old xkcd comic strip that I had somehow missed before. Shermer noted that it represented “another reply to pessimists bemoaning modern technologies as soul-crushing and isolating.” Similarly, there’s a meme making the rounds on Twitter that jokes about how newspapers made people just as antisocial in the past as newer technologies supposedly do today.

The sentiments expressed by the comic and that image make it clear how people often tend to romanticize past technologies or fail to remember that many people expressed the same fears about them as critics do today about newer ones. I’ve written dozens of articles about “moral panics” and “techno-panics,” most of which are cataloged here. The common theme of those essays is that, when it comes to fears about innovations, there really is nothing new under the sun. Continue reading →

Until recently, I wasn’t familiar with Freedom House’s Freedom on the Net reports. Freedom House has useful recommendations for Internet non-regulation and for protecting freedom of speech, and their Freedom on the Net reports attempt to grade a complex subject: national online freedoms.

However, their latest US report came to my attention. Tech publications like TechCrunch and Internet regulation advocates were trumpeting the report because it touched on net neutrality. Freedom House penalized the US’s score because, a few months ago, the FCC repealed the so-called net neutrality rules from 2015.

The authors of the US report reached a curious conclusion: Internet deregulation means a loss of online freedom. In 2015, the FCC classified Internet services as a “Title II” common carrier service. In 2018, the FCC reversed course and shifted Internet services from one of the most-regulated industries in the US to one of the least-regulated. This 2018 deregulation, according to the Freedom House US report, creates an “obstacle to access” and, while the US is still “free,” the repeal moves the US slightly in the direction of “digital authoritarianism.” Continue reading →

Recently, Noah Smith explored an emerging question in tech: is there a kill zone where new and innovative upstarts are being throttled by the biggest players? He explains:

Facebook commissioned a study by consultant Oliver Wyman that concluded that venture investment in the technology sector wasn’t lower than in other sectors, which led Wyman to conclude that there was no kill zone.

But economist Ian Hathaway noted that looking at the overall technology industry was too broad. Examining three specific industry categories — internet retail, internet software and social/platform software, corresponding to the industries dominated by Amazon, Google and Facebook, respectively — Hathaway found that initial venture-capital financings have declined by much more in the past few years than in comparable industries. That suggests the kill zone is real.

A recent paper by economists Wen Wen and Feng Zhu reaches a similar conclusion. Observing that Google has tended to follow Apple in deciding which mobile-app markets to enter, they assessed whether the threat of potential entry by Google (as measured by Apple’s actions) deters innovation by startups making apps for Google’s Android platform. They conclude that when the threat of the platform owner’s entry is higher, fewer app makers will be interested in offering a product for that particular niche. A 2014 paper by the same authors found similar results for Amazon and third-party merchants using its platform.

So, are American tech companies making it difficult for startups? Perhaps, but there are reasons to be skeptical. Continue reading →

To read Cathy O’Neil’s Weapons of Math Destruction (2016) is to encounter another in a line of progressive pugilists of the technological age. Where Tim Wu took on the future of the Internet and Evgeny Morozov chided online slacktivism, O’Neil takes on algorithms, or what she has dubbed weapons of math destruction (WMDs).

O’Neil’s book came at just the right moment in 2016. It sounded the alarm about big data just as it was becoming a topic for public discussion. And now, two years later, her worries seem prescient. As she explains in the introduction,

Big Data has plenty of evangelists, but I’m not one of them. This book will focus sharply in the other direction, on the damage inflicted by WMDs and the injustice they perpetuate. We will explore harmful examples that affect people at critical life moments: going to college, borrowing money, getting sentenced to prison, or finding and holding a job. All of these life domains are increasingly controlled by secret models wielding arbitrary punishments.

O’Neil is explicit about laying the blame at the feet of the WMDs themselves: “You cannot appeal to a WMD. That’s part of their fearsome power. They do not listen.” Yet these models aren’t deployed and adopted in a frictionless environment. Instead, they “reflect goals and ideology,” as O’Neil readily admits. Where Weapons of Math Destruction falters is that it ascribes too much agency to algorithms in places, and in doing so it misses the broader politics behind algorithmic decision making. Continue reading →

By Brent Skorup and Trace Mitchell

An important benefit of 5G cellular technology is more bandwidth and more reliable wireless services. This means carriers can offer more niche services, like smart glasses for the blind and remote assistance for autonomous vehicles. A Vox article last week explored an issue familiar to technology experts: will millions of new 5G transmitters and devices increase cancer risk? It’s an important question but, in short, we’re not losing sleep over it.

5G differs from previous generations of cellular technology in its reliance on “densification”: placing many smaller transmitters throughout neighborhoods. This densification means that cities must regularly approve operators’ plans to upgrade infrastructure and install devices on public rights-of-way. However, some homeowners and activists are resisting 5G deployment because they fear more transmitters will lead to more radiation and cancer. (Under federal law, the FCC sets safety requirements for emitters like cell towers and 5G equipment. State and local regulators therefore are not allowed to base permitting decisions on what they or their constituents believe are the effects of wireless emissions.)

We aren’t public health experts; however, we are technology researchers, and we decided to explore the telecom data to see whether there is a relationship. If radio transmissions increase cancer, we should expect to see a correlation between the number of cellular transmitters and cancer rates. Presumably there is a cumulative effect: the more cellular radiation people are exposed to, the higher the cancer rates.
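As a rough illustration of the kind of check described above, the sketch below correlates transmitter counts with cancer incidence rates over time. The figures are made-up placeholders, not the actual telecom or cancer data; they simply show how rapidly growing transmitter counts paired with a flat incidence rate yield a correlation near zero.

    # Minimal sketch: Pearson correlation between transmitter counts and a
    # cancer incidence rate. All numbers below are illustrative placeholders.

    def pearson(xs, ys):
        """Pearson correlation coefficient between two equal-length series."""
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        var_x = sum((x - mean_x) ** 2 for x in xs)
        var_y = sum((y - mean_y) ** 2 for y in ys)
        return cov / (var_x ** 0.5 * var_y ** 0.5)

    years        = [2000, 2005, 2010, 2015]
    transmitters = [100_000, 180_000, 280_000, 400_000]  # hypothetical counts
    cancer_rate  = [6.5, 6.4, 6.4, 6.5]                   # hypothetical cases per 100,000

    for y, t, c in zip(years, transmitters, cancer_rate):
        print(f"{y}: {t:>9,} transmitters, {c} cases per 100,000")
    print(f"Pearson correlation: {pearson(transmitters, cancer_rate):+.2f}")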

From what we can tell, there is no link between cellular systems and cancer. Despite a huge increase in the number of transmitters in the US since 2000, the nervous system cancer rate hasn’t budged. The number of wireless transmitters in the US has increased massively, by 300%, in 15 years. (This is on the conservative side; there are tens of millions of WiFi devices that are also transmitting but are not counted here.) Continue reading →

By Brent Skorup and Michael Kotrous

In 1999, the FCC completed one of its last spectrum “beauty contests.” A sizable segment of spectrum was set aside, for free, for the US Department of Transportation (DOT) and DOT-selected device companies to develop DSRC, a standard for wireless automotive communications like vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) messaging. The government’s grand plans for DSRC never materialized, and in the intervening 20 years new tech—like lidar, radar, and cellular systems—advanced and now does most of what regulators planned for DSRC.

Too often, however, government technology plans linger, kept alive by interest groups that rely on the new regulatory privilege even when the market moves on. At the eleventh hour of the Obama administration, NHTSA proposed mandating DSRC devices in all new vehicles, an unprecedented move that Brent and other free-market advocates opposed in public interest comment filings. As Brent wrote last year,

In the fast-moving connected car marketplace, there is no reason to force products with reliability problems [like DSRC] on consumers. Any government-designed technology that is “so good it must be mandated” warrants extreme skepticism….

Further,

Rather than compel automakers to add costly DSRC systems to cars, NHTSA should consider a certification or emblem system for vehicle-to-vehicle safety technologies, similar to its five-star crash safety ratings. Light-touch regulatory treatment would empower consumer choice and allow time for connected car innovations to develop.

Fortunately, the Trump administration put the brakes on the mandate, which would have added cost and complexity to cars for uncertain and unlikely benefits.

However, some regulators and companies are trying to revive the DSRC device industry while NHTSA’s proposed DSRC mandate is on life support. Marc Scribner at CEI uncovered a sneaky attempt to create DSRC technology sales via an EPA proceeding. The stalking horse DSRC boosters have chosen is the Corporate Average Fuel Economy (CAFE) regulations—specifically the EPA’s off-cycle program. EPA and NHTSA jointly manage these regulations. That program rewards manufacturers who adopt new technologies that reduce a vehicle’s emissions in ways not captured by conventional measures like highway fuel economy.

Under the proposed rules, automakers that install V2V or V2I capabilities can receive credit for having reduced emissions. The EPA proposal doesn’t say “DSRC,” but it singles out only one technology standard to be favored in this scheme: a standard underlying DSRC.

This proposal comes as a bit of a surprise to those who have followed auto technology; we’re aware of no studies showing that DSRC improves emissions. (DSRC’s primary use case today is collision warnings to the driver.) But the EPA proposes a convenient end-run around that problem: simply waiving the requirement that manufacturers provide data showing a reduction in harmful emissions. Instead of requiring emissions data, the EPA proposes a much lower bar: automakers need only show that these devices “have some connection to overall environmental benefits.” Unless the agency applies the credits in a tech-neutral way and requires more rigor in the final rules, which is highly unlikely, this looks like a backdoor subsidy to DSRC via gaming of emission-reduction regulations.

Hopefully EPA regulators will see through the ruse and drop the proposal. It was a pleasant surprise last week when a DOT spokesman committed the agency to a tech-neutral approach for this “talking car” band. But after 20 years, the 75 MHz of spectrum gifted to DSRC device makers should be repurposed by the FCC for flexible use. Fortunately, the FCC has started thinking about alternative uses for the DSRC spectrum. In 2015, Commissioners O’Rielly and Rosenworcel said the agency should consider flexible-use alternatives to this DSRC-only band.

The FCC would be wise to follow through and push even further. Until the gifted spectrum that powers DSRC is reallocated to flexible use, interest groups will continue to pull every regulatory lever they have to subsidize or mandate adoption of talking-car technology. If DSRC is the best V2V technology available, device makers should win market share by convincing auto companies, not by convincing regulators.