By Jordan Reimschisel & Adam Thierer

[Originally published on Medium on May 2, 2017.]

Americans have schizophrenic opinions about artificial intelligence (AI) technologies. Ask the average American what they think of AI and they will often respond with a combination of fear, loathing, and dread. Yet, the very same AI applications they claim to be so anxious about are already benefiting their lives in profound ways.

Last week, we posted complementary essays about the growing “technopanic” over artificial intelligence and the potential for that panic to undermine many important life-enriching medical innovations or healthcare-related applications. We were inspired to write those essays after reading the results of a recent poll conducted by Morning Consult, which suggested that the public was very uncomfortable with AI technologies. “A large majority of both Republicans and Democrats believe there should be national and international regulations on artificial intelligence,” the poll found. Of the 2,200 American adults surveyed, “73 percent of Democrats said there should be U.S. regulations on artificial intelligence, as did 74 percent of Republicans and 65 percent of independents.”

We noted that there were reasons to question the significance of those results in light of the binary way in which the questions were asked. Nonetheless, there are clearly some serious concerns among the public about AI and robotics. You see that when you read deeper into the poll results for specific questions and find respondents saying that they are “somewhat” to “very uncomfortable” about a wide range of specific AI applications.

Yet, in each case, Americans are already deriving significant benefits from the very AI applications they claim to be so uncomfortable with.


Written with Christopher Koopman and Brent Skorup (originally published on Medium on April 10, 2017)

Innovation isn’t just about the latest gee-whiz gizmos and gadgets. That’s all nice, but something far more profound is at stake: Innovation is the single most important determinant of long-term human well-being. There exists widespread consensus among historians, economists, political scientists and other scholars that technological innovation is the linchpin of expanded economic growth, opportunity, choice, mobility, and human flourishing more generally. It is the ongoing search for new and better ways of doing things that drives human learning and prosperity in every sense — economic, social, and cultural.

As the Industrial Revolution revealed, leaps in economic and human growth cannot be planned. They arise from societies that reward risk takers and legal systems that accommodate change. Our ability to achieve progress is directly proportional to our willingness to embrace and benefit from technological innovation, and it is a direct result of getting public policies right.

The United States is uniquely positioned to lead the world into the next era of global technological advancement and wealth creation. That’s why we and our colleagues at the Technology Policy Program at the Mercatus Center at George Mason University devote so much time and energy to defending the importance of innovation and countering threats to it. Unfortunately, those threats continue to multiply as fast as new technologies emerge.

Juma book cover

“The quickest way to find out who your enemies are is to try doing something new.” Thus begins Innovation and Its Enemies, an ambitious new book by Calestous Juma that will go down as one of the decade’s most important works on innovation policy.

Juma, who is affiliated with the Harvard Kennedy School’s Belfer Center for Science and International Affairs, has written a book that is rich in history and insights about the social and economic forces and factors that have, again and again, led various groups and individuals to oppose technological change. Juma’s extensive research documents how “technological controversies often arise from tensions between the need to innovate and the pressure to maintain continuity, social order, and stability” (p. 5) and how this tension is “one of today’s biggest policy challenges.” (p. 8)

What Juma does better than any other technology policy scholar to date is that he identifies how these tensions develop out of deep-seated psychological biases that eventually come to affect attitudes about innovations among individuals, groups, corporations, and governments. “Public perceptions about the benefits and risks of new technologies cannot be fully understood without paying attention to intuitive aspects of human psychology,” he correctly observes. (p. 24)

Permissionless Innovation 2nd edition book cover

I am pleased to announce the release of the second edition of my book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. As with the first edition, the book represents a short manifesto that condenses — and attempts to make more accessible — arguments that I have developed in various law review articles, working papers, and blog posts over the past few years. The book attempts to accomplish two major goals.

First, I attempt to show how the central fault line in almost all modern technology policy debates revolves around “the permission question,” which asks: Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? How that question is answered depends on the disposition one adopts toward new inventions. Two conflicting attitudes are evident.

One disposition is known as the “precautionary principle.” Generally speaking, it refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.

The other vision can be labeled “permissionless innovation.” It refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.

I argue that we are witnessing a grand clash of visions between these two mindsets in almost all major technology policy discussions today.

This article originally appeared at techfreedom.org.

Twenty years ago today, President Clinton signed the Telecommunications Act of 1996. John Podesta, his chief of staff, immediately saw the problem: “Aside from hooking up schools and libraries, and with the rather major exception of censorship, Congress simply legislated as if the Net were not there.”

Here’s our take on what Congress got right (some key things), what it got wrong (most things), and what an update to the key laws that regulate the Internet should look like. The short version is:

  • End FCC censorship of “indecency”
  • Focus on promoting competition
  • Focus regulation on consumers rather than arbitrary technological silos or political whim
  • Get the FCC out of the business of helping government surveillance

Trying, and Failing, to Censor the Net

Good: The Act is most famous for Section 230, which made Facebook and Twitter possible. Without 230, such platforms would have been held liable for the speech of their users — just as newspapers are liable for letters to the editor. Trying to screen all user content would simply have been impossible. Sharing user-generated content (UGC) on sites like YouTube and social networks would’ve been tightly controlled or might simply never have taken off. Without Section 230, we might all still be locked into AOL!

Bad: Still, the Act was very much driven by a technopanic over “protecting the children.”

  • Internet Censorship. 230 was married to a draconian crackdown on Internet indecency. Aimed at keeping pornography away from minors, the rest of the Communications Decency Act — rolled into the Telecom Act — would have required age verification of all users, not just on porn sites, but probably any UGC site, too. Fortunately, the Supreme Court struck this down as a ban on anonymous speech online.
  • Broadcast Censorship. Unfortunately, the FCC is still in the censorship business for traditional broadcasting. The 1996 Act did nothing to check the agency’s broad powers to decide how long a glimpse of a butt or a nipple is too much for Americans’ sensitive eyes.

Unleashing Competition—Slowly

Good: Congress unleashed over $1.3 trillion in private broadband investment, pitting telephone companies and cable companies against each other in a race to serve consumers — for voice, video and broadband service.

  • Legalizing Telco Video. In 1984, Congress had (mostly) prohibited telcos from providing video service — largely on the assumption that it was a monopoly. Congress reversed that, which eventually meant telcos had the incentive to invest in networks that could carry video — and super-fast broadband.
  • Breaking Local Monopolies. Congress also barred localities from blocking new entry by denying a video “franchise.”
  • Encouraging Cable Investment. The 1992 Cable Act had briefly imposed price regulation on basic cable packages. This proved so disastrous that the Democratic FCC retreated — but only after killing a cycle of investment and upgrades, delaying cable modem service by years. In 1996, Congress finally put a stake through the heart of such rate regulation, removing investment-killing uncertainty.

Bad: While the Act laid the foundations for what became facilities-based network competition, its immediate focus was pathetically short-sighted: trying to engineer artificial competition for telephone service.

  • Unbundling Mandates. The Act created an elaborate set of requirements that telephone companies “unbundle” parts of their networks so that resellers could use them, at sweetheart prices, to provide “competitive” service. The FCC then spent the next nine years fighting over how to set these rates.
  • Failure of Vision. Meanwhile, competing networks provided fierce competition: cable providers gained over half the telephony market with a VoIP service, and 47% of customers have simply cut the cord — switching entirely to wireless. Though the FCC refuses to recognize it, broadband is becoming more competitive, too: 2014 saw telcos invest in massive upgrades, bringing 25–75 Mbps speeds to more than half the country by pushing fiber closer to homes. The cable-telco horse race is fiercer than ever — and Google Fiber has expanded its deployment of a third pipe to the home, while cable companies are upgrading to provide gigabit-plus speeds and wireless broadband has become a real alternative for rural America.
  • Delaying Fiber. The greatest cost of the FCC’s unbundling shenanigans was delaying the major investments telcos needed to keep up with cable. Not until 2003 did the FCC make clear that it would not impose unbundling mandates on fiber — which pushed Verizon to begin planning its FiOS fiber-to-the-home network. The other crucial step came in 2006, when the Commission finally clamped down on localities that demanded lavish ransoms for allowing the deployment of new networks, which stifled competition.

Regulation

Good: With the notable exception of unbundling mandates, the Act was broadly deregulatory.

  • General thrust. Congress could hardly have been more clear: “It is the policy of the United States… to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.”
  • Ongoing Review & Deregulation. Congress gave the FCC broad discretion to ratchet down regulation to promote competition.

Bad: The Clinton Administration realized that technological change was rapidly erasing the lines separating different markets, and had proposed a more technology-neutral approach in 1993. But Congress rejected that approach. The Act continued to regulate by dividing technologies into silos: broadcasting (Title III), telephone (Title II) and cable (Title VI). Title I became a catch-all for everything else. Crucially, Congress didn’t draw a clear line between Title I and Title II, setting in motion a high-stakes fight that continues today.

  • Away from Regulatory Silos. Bill Kennard, Clinton’s FCC Chairman, quickly saw just how obsolete the Act was. His 1999 Strategic Plan remains a roadmap for FCC reform.
  • Away from Title II. Kennard also indicated that he favored placing all broadband in Title I — mainly because he understood that Title II was designed for a monopoly and would tend to perpetuate it. Vibrant competition between telcos and cable companies could happen only under Title I. But it was the Bush FCC that made this official, classifying cable modem as Title I in 2002 and telco DSL in 2005.
  • Net Neutrality Confusion. The FCC spent a decade trying to figure out how to regulate net neutrality, losing in court twice, distracting the agency from higher priorities — like promoting broadband deployment and adoption — and making telecom policy, once an area of non-partisan pragmatism, a fiercely partisan ideological cesspool.
  • Back to Title II. In 2015, the FCC reclassified broadband under Title II — not because it didn’t have other legal options for regulating net neutrality, but because President Obama said it should. He made the issue part of his re-assertion of authority after Democrats lost the 2014 midterm elections. Net neutrality and Title II became synonymous, even though they have little to do with each other. Now, the FCC’s back in court for the third time.
  • Inventing a New Act. Unless the courts stop it, the FCC will exploit the ambiguities of the ‘96 Act to essentially write a new Act out of thin air: regulating way up with Title II, using its forbearance powers to temporarily suspend politically toxic parts of the Act (like unbundling), and inventing wholly new rules that give the FCC maximum discretion—while claiming the power to do anything that somehow promotes broadband. The FCC calls this all “modernization” but it’s really a staggering power grab that allows the FCC to control the Internet in the murkiest way possible.
  • Bottom line: The 1996 Act gives the FCC broad authority to regulate in the “public interest,” without effectively requiring the FCC to gauge the competitive effects of what it does. The agency’s stuck in a kind of Groundhog Day of over-regulation, constantly over-doing it without ever learning from its mistakes.

Time for a #CommActUpdate

Censorship. The FCC continues to censor dirty words and even brief glimpses of skin on television because of a 1978 decision that assumes parents are helpless to control their kids’ media consumption. Today, parental control tools make this assumption obsolete: parents can easily block programming marked as inappropriate. Congress should require the FCC to focus on outright obscenity — and let parents choose for themselves.

Competition. If the 1996 Act served to allow two competing networks, a rewrite should focus on driving even fiercer cable-telco competition, encouraging Google Fiber and others to build a third pipe to the home, and making wireless an even stronger competitor.

  • Title II. If you wanted to protect cable companies from competition, you couldn’t find a better way to do it than Title II. Closing that Pandora’s Box forever will encourage companies like Google Fiber to enter the market. But Congress needs to finish what the 1996 Act started: it’s not enough to stop localities from denying franchises for video service (and thus broadband, too).
  • Local Barriers. Congress should crack down on the moronic local practices that have made deployment of new networks prohibitively expensive — learning from the success of Google Fiber cities, which have cut red tape, lowered fees and generally gotten out of the way. Pending bipartisan legislation would make these changes for federal assets, and require federal highway projects to include Dig Once conduits to make fiber deployment easier. That’s particularly helpful for rural areas, which the FCC has ignored, but making deployment easier inside cities will require making municipal rights of way easier to use. Instead of rushing to build their own broadband networks, localities should first at least try to stimulate private deployment.

Regulation. Technological silos made little sense in 1993. Today, they’re completely obsolete.

  • Unchecked Discretion. The FCC’s right about one thing: rigid rules don’t make sense either, given how fast technology is changing. But giving the FCC sweeping discretion is even more dangerous: it makes regulating the Internet inherently political, subject to presidential whim and highly sensitive to elections.
  • The Fix. There’s a simple solution: write clear standards that let the FCC work across all communications technologies, but that require the FCC to prove that its tinkering actually makes consumers better off. As long as the FCC can do whatever it claims is in the “public interest,” the Internet will never be safe.
  • Rethinking the FCC. Indeed, Congress should seriously consider breaking up the FCC, transferring its consumer protection functions to the Federal Trade Commission and its spectrum functions to the Commerce Department.

Encryption. Since 1994, the FCC has had the power to require “telecommunications services” to be wiretap-ready — and the discretion to decide how to interpret that term. Today, the FBI is pushing for a ban on end-to-end encryption — so law enforcement can get backdoor access into services like Snapchat. Unfortunately, foreign governments and malicious hackers could use those backdoors, too. Congress is stalling, but the FCC could give law enforcement exactly what it wants — using the same legal arguments it used to reclassify mobile broadband under Title II. Law enforcement is probably already using this possibility to pressure Internet companies against adopting secure encryption. Congress should stop the FCC from requiring back doors.

Smart Device Paranoia

October 5, 2015

The idea that the world needs further dumbing down was really the last thing on my mind. Yet this is exactly what Jay Stanley argues for in a recent post on Free Future, the ACLU tech blog.

Specifically, Stanley is concerned by the proliferation of “smart devices,” from smart homes to smart watches, and the enigmatic algorithms that power them. Exhibit A: the Volkswagen “smart control devices” designed to deliberately mis-measure diesel emissions. Far from treating it as an isolated case, Stanley extrapolates the Volkswagen scandal into a parable about the dangers of smart devices more generally, and calls for the recognition of “the virtue of dumbness”:

When we flip a coin, its dumbness is crucial. It doesn’t know that the visiting team is the massive underdog, that the captain’s sister just died of cancer, and that the coach is at risk of losing his job. It’s the coin’s very dumbness that makes everyone turn to it as a decider. … But imagine the referee has replaced it with a computer programmed to perform a virtual coin flip. There’s a reason we recoil at that idea. If we were ever to trust a computer with such a task, it would only be after a thorough examination of the computer’s code, mainly to find out whether the computer’s decision is based on “knowledge” of some kind, or whether it is blind as it should be.

While recoiling is a bit melodramatic, it’s clear from this that “dumbness” is not even the key issue at stake. What Stanley is really concerned about is bias or partiality (what he dubs “neutrality anxiety”), which is hardly unique to smart devices, nor is opacity. A physical coin can be biased, a programmed coin can be fair, and at first glance the fairness of a physical coin is not really any more obvious.

Yet this is the argument Stanley uses to justify his proposed requirement that all smart device code be open to the public for scrutiny going forward. Based on a knee-jerk commitment to transparency, he gives zero weight to the social benefit of allowing software creators a level of trade secrecy, especially as a potential substitute for patent and copyright protections. This is all the more ironic, given that Volkswagen used existing copyright law to hide its own malfeasance.

More importantly, the idea that the only way to check a virtual coin is to look at the source code is a serious non sequitur. After all, in-use testing was how Volkswagen was actually caught in the end. What matters, in other words, is how the coin behaves in large and varied samples. In either the virtual or physical case, the best and least intrusive way to check a coin is simply to do thousands of flips. But what takes hours with a dumb coin takes a fraction of a second with a virtual coin. So I know which I prefer.
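The black-box approach is easy to sketch in code. The following minimal Python sketch (function names and thresholds are mine, purely illustrative) audits a coin-flip function by sampling its behavior thousands of times, without ever reading its source:

```python
import random

def audit_coin(flip, n=100_000, tolerance=0.01):
    """Black-box fairness check: call the coin n times and compare
    the observed heads rate to the expected 0.5. With n = 100,000,
    the standard error of the rate is about 0.0016, so a fair coin
    virtually never strays more than 0.01 from 0.5."""
    heads = sum(flip() for _ in range(n))
    rate = heads / n
    return abs(rate - 0.5) <= tolerance

# A fair virtual coin, and a biased one that "knows" which side to favor.
fair_coin = lambda: random.random() < 0.5
biased_coin = lambda: random.random() < 0.6

fair_passes = audit_coin(fair_coin)      # True: behaves like a fair coin
biased_passes = audit_coin(biased_coin)  # False: the bias shows up in the sample
```

A statistician would use a formal binomial test rather than a fixed tolerance, but the point stands: the audit needs only the coin’s outputs, not its code.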


Tech Policy Threat Matrix

September 24, 2015

On the whiteboard that hangs in my office, I have a giant matrix of technology policy issues and the various policy “threat vectors” that might end up driving regulation of particular technologies or sectors. Along with my colleagues at the Mercatus Center’s Technology Policy Program, we constantly revise this list of policy priorities and simultaneously make an (obviously quite subjective) attempt to put some weights on the potential policy severity associated with each threat of intervention. The matrix looks like this:

 

Tech Policy Issue Matrix 2015

I use five general policy concerns when considering the likelihood of regulatory intervention in any given area. Those policy concerns are:

  1. privacy (reputation issues, fear of “profiling” & “discrimination,” amorphous psychological / cognitive harms);
  2. safety (health & physical safety or, alternatively, child safety and speech / cultural concerns);
  3. security (hacking, cybersecurity, law enforcement issues);
  4. economic disruption (automation, job dislocation, sectoral disruptions); and,
  5. intellectual property (copyright and patent issues).


It was my pleasure this week to be invited to deliver some comments at an event hosted by the Information Technology and Innovation Foundation (ITIF) to coincide with the release of their latest study, “The Privacy Panic Cycle: A Guide to Public Fears About New Technologies.” The goal of the new ITIF report, which was co-authored by Daniel Castro and Alan McQuinn, is to highlight the dangers associated with “the cycle of panic that occurs when privacy advocates make outsized claims about the privacy risks associated with new technologies. Those claims then filter through the news media to policymakers and the public, causing frenzies of consternation before cooler heads prevail, people come to understand and appreciate innovative new products and services, and everyone moves on.” (p. 1)

As Castro and McQuinn describe it, the privacy panic cycle “charts how perceived privacy fears about a technology grow rapidly at the beginning, but eventually decline over time.” They divide this cycle into four phases: Trusting Beginnings, Rising Panic, Deflating Fears, and Moving On. Here’s how they depict it in an image:

Privacy Panic Cycle - 1

 


The Obama Administration has just released a draft “Consumer Privacy Bill of Rights Act of 2015.” Generally speaking, the bill aims to translate fair information practice principles (FIPPs) — which have traditionally been flexible and voluntary guidelines — into a formal set of industry best practices that would be federally enforced on private sector digital innovators. This includes federally-mandated Privacy Review Boards, approved by the Federal Trade Commission, the agency that will be primarily responsible for enforcing the new regulatory regime.

Many of the principles found in the Administration’s draft proposal are quite sensible as best practices, but the danger here is that they could soon be converted into a heavy-handed, bureaucratized regulatory regime for America’s highly innovative, data-driven economy.

No matter how well-intentioned this proposal may be, it is vital to recognize that restrictions on data collection could negatively impact innovation, consumer choice, and the competitiveness of America’s digital economy.

Online privacy and security is vitally important, but we should look to use alternative and less costly approaches to protecting privacy and security that rely on education, empowerment, and targeted enforcement of existing laws. Serious and lasting long-term privacy protection requires a layered, multifaceted approach incorporating many solutions.

That is why flexible data collection and use policies and evolving best practices will ultimately serve consumers better than one-size-fits-all, top-down regulatory edicts.

by Adam Thierer & Andrea Castillo

Cybersecurity policy is a big issue this year, so we thought it would be worth reminding folks of some contributions to the literature made by Mercatus Center-affiliated scholars in recent years. Our research, which can be found here, can be condensed to these five core points:

1) Institutions, societies, and economies are more resilient than we give them credit for and can deal with adversity, even cybersecurity threats.

See: Sean Lawson, “Beyond Cyber-Doom: Assessing the Limits of Hypothetical Scenarios in the Framing of Cyber-Threats,” December 19, 2012.

2) Companies and organizations have a vested interest in finding creative solutions to these problems through ongoing experimentation, and they are pursuing them with great vigor.

See: Eli Dourado, “Internet Security Without Law: How Service Providers Create Order Online,” June 19, 2012.

3) Over-arching, top-down “cybersecurity frameworks” threaten to undermine dynamism in cybersecurity and Internet governance, and could promote rent-seeking and corruption. Instead, the government should foster continued dynamic cybersecurity efforts through the development of a robust private-sector cybersecurity insurance market.

See: Eli Dourado and Andrea Castillo, “Why the Cybersecurity Framework Will Make Us Less Secure,” April 17, 2014.

4) The language sometimes used to describe cybersecurity threats borders on “techno-panic” rhetoric that is based on “threat inflation.”

See the Lawson paper already cited, as well as: Jerry Brito & Tate Watkins, “Loving the Cyber Bomb? The Dangers of Threat Inflation in Cybersecurity Policy,” April 10, 2012; and Adam Thierer, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” January 25, 2013.

5) Finally, taking these other points into account, our scholars have concluded that academics and policymakers should be very cautious about how they define “market failure” in the cybersecurity context. Moreover, to the extent they propose new regulatory controls to address perceived problems, those rules should be subjected to rigorous benefit-cost analysis.

See: Eli Dourado, “Is There a Cybersecurity Market Failure?” January 23, 2012.

 
