Adam is a senior research fellow at the Mercatus Center at George Mason University. He previously served as President of the Progress & Freedom Foundation, Director of Telecommunications Studies at the Cato Institute, and Fellow in Economic Policy at the Heritage Foundation.
Sen. Warren makes many interesting points about the dangers of regulatory capture, but the heart of her argument about how to deal with the problem can basically be summarized as ‘Let’s Build a Better Breed of Bureaucrat and Give Them More Money.’ In her own words, she says we should “limit opportunities for ‘cultural capture’” of government officials and also “give agencies the money that they need to do their jobs.”
It may sound good in theory, but I’m always a bit perplexed by that argument because the implicit claims here are that:
(a) the regulatory officials of the past were somehow less noble-minded and more open to corruption than some hypothetical better breed of bureaucrat that is out there waiting to be found and put into office; and
(b) that the regulatory agencies of the past were somehow starved for resources and lacked “the money that they need to do their jobs.”
Neither of these assumptions is true, and yet those arguments seem to animate most of the reform proposals set forth by progressive politicians and scholars for how to deal with the problem of capture. Continue reading →
In theory, the Food & Drug Administration (FDA) exists to save lives and improve health outcomes. All too often, however, that goal is hindered by the agency’s highly bureaucratic, top-down, command-and-control orientation toward drug and medical device approval.
Today’s case in point involves families of children with diabetes, many of whom are increasingly frustrated with the FDA’s foot-dragging when it comes to approval of medical devices that could help their kids. Writing in The Wall Street Journal, Kate Linebaugh discusses how “Tech-Savvy Families Use Home-Built Diabetes Device” to help their kids when FDA regulations limit the availability of commercial options. She documents how families of diabetic children are taking matters into their own hands and creating their own home-crafted insulin pumps, which can automatically dose the proper amount of the hormone in response to their child’s blood-sugar levels. Families are building, calibrating, and troubleshooting these devices on their own. And the movement is growing. Linebaugh reports that:
More than 50 people have soldered, tinkered and written software to make such devices for themselves or their children. The systems—known in the industry as artificial pancreases or closed loop systems—have been studied for decades, but improvements to sensor technology for real-time glucose monitoring have made them possible.
The Food and Drug Administration has made approving such devices a priority and several companies are working on them. But the yearslong process of commercial development and regulatory approval is longer than many patients want, and some are technologically savvy enough to do it on their own.
Linebaugh notes that this particular home-built medical project (known as OpenAPS) was created by Dana Lewis, a 27-year-old with Type 1 diabetes in Seattle. Linebaugh says that: Continue reading →
On May 3rd, I’m excited to be participating in a discussion with Yale University bioethicist Wendell Wallach at the Microsoft Innovation & Policy Center in Washington, DC. (RSVP here.) Wallach and I will be discussing issues we write about in our new books, both of which focus on possible governance models for emerging technologies and the question of how much preemptive control society should exercise over new innovations.
Of all the books of technological criticism or skepticism that I’ve read in recent years—and I have read stacks of them!—A Dangerous Master is by far the most thoughtful and interesting. I have grown accustomed to major works of technological criticism being caustic, angry affairs. Most of them are just dripping with dystopian dread and a sense of utter exasperation and outright disgust at the pace of modern technological change.
Although he is certainly concerned about a wide variety of modern technologies—drones, robotics, nanotech, and more—Wallach isn’t a purveyor of the politics of panic. There are some moments in the book when he resorts to some hyperbolic rhetoric, such as when he frets about an impending “techstorm” and the potential, as the book’s title suggests, for technology to become a “dangerous master” of humanity. For the most part, however, his approach is deeper and more dispassionate than what is found in the leading tracts of other modern techno-critics.
I am pleased to announce the release of the second edition of my book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. As with the first edition, the book represents a short manifesto that condenses — and attempts to make more accessible — arguments that I have developed in various law review articles, working papers, and blog posts over the past few years. The book attempts to accomplish two major goals.
First, I attempt to show how the central fault line in almost all modern technology policy debates revolves around “the permission question,” which asks: Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? How that question is answered depends on the disposition one adopts toward new inventions. Two conflicting attitudes are evident.
One disposition is known as the “precautionary principle.” Generally speaking, it refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harm to individuals, groups, specific entities, cultural norms, or various existing laws and traditions.
The other vision can be labeled “permissionless innovation.” It refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.
I argue that we are witnessing a grand clash of visions between these two mindsets in almost all major technology policy discussions today. Continue reading →
As “software eats the world,” the reach of the Digital Revolution continues to expand to far-flung fields and sectors. The ramifications of this are tremendously exciting but at times can also be a little bit frightening.
Consider this recent Washington Post headline: “A College Kid Spends $60 to Straighten His Own Teeth. What Could Possibly Go Wrong?” Matt McFarland of the Post reports that, “A college student has received a wealth of interest in his dental work after publishing an account of straightening his own teeth for $60.” The student at the New Jersey Institute of Technology “had no dentistry experience when he decided to create plastic aligners to improve his smile,” but was able to use a 3D printer and laser scanner on campus to accomplish the job. “After publishing before-and-after pictures of his teeth this month, [the student] has received hundreds of requests from strangers, asking him to straighten their teeth.”
McFarland cites many medical professionals who are horrified at the prospect of patients taking their health decisions into their own hands and engaging in practices that could be dangerous to themselves and others. Some of the licensed practitioners cited in the story come across as just being bitter losers as they face the potential for the widespread disintermediation of their profession. After all, they currently charge thousands of dollars for various dental procedures and equipment. Thanks to technological innovations, however, those costs could soon plummet, which could significantly undercut their healthy margins on dental services and equipment. On the other hand, these professionals have a fair point about untrained citizens doing their own dental work or giving others the ability to do so. Things certainly could go horribly wrong.
This is another interesting case study related to the subject of a forthcoming Mercatus paper and an upcoming law review article of mine on 3D printing, both of which pose the following question: What happens when radically decentralized technological innovation (such as 3D printing) gives people a de facto “right to try” new medicines and medical devices? Continue reading →
U.S. Commodity Futures Trading Commission (CFTC) Commissioner J. Christopher Giancarlo delivered an amazing address this week before the Depository Trust & Clearing Corporation 2016 Blockchain Symposium. The title of his speech was “Regulators and the Blockchain: First, Do No Harm,” and it will go down as the definitive early statement about how policymakers can apply a principled, innovation-enhancing policy paradigm to distributed ledger technology (DLT) or “blockchain” applications.
“The potential applications of this technology are being widely imagined and explored in ways that will benefit market participants, consumers and governments alike,” Giancarlo noted in his address. But in order for that to happen, he said, we have to get policy right. “It is time again to remind regulators to ‘do no harm,’” he argued, going on to note that:
The United States’ global leadership in technological innovation of the Internet was built hand-in-hand with its enlightened “do no harm” regulatory framework. Yet, when the Internet developed in the mid-1990s, none of us could have imagined its capabilities that we take for granted today. Fortunately, policymakers had the foresight to create a regulatory environment that served as a catalyst rather than a choke point for innovation. Thanks to their forethought and restraint, Internet-based applications have revolutionized nearly every aspect of human life, created millions of jobs and increased productivity and consumer choice. Regulators must show that same forethought and restraint now [for the blockchain].
What Giancarlo is referring to is the approach that the U.S. government adopted toward the Internet and digital networks in the mid-1990s. You can think of this vision as “permissionless innovation.” As I explain in my recent book of the same title, permissionless innovation refers to the notion that we should generally be free to experiment and learn new and better ways of doing things through ongoing trial-and-error. Continue reading →
Viruses, malware, spam, data breaches, and critical system intrusions are just some of the security-related concerns that often motivate precautionary thinking and policy proposals. But as with privacy- and safety-related worries, the panicky rhetoric surrounding these issues is usually unfocused and counterproductive.
In today’s cybersecurity debates, for example, it is not uncommon to hear frequent allusions to the potential for a “digital Pearl Harbor,” a “cyber cold war,” or even a “cyber 9/11.” These analogies are made even though these historical incidents resulted in death and destruction of a sort not comparable to attacks on digital networks. Others refer to “cyber bombs” or technological “time bombs,” even though no one can be “bombed” with binary code. Michael McConnell, a former director of national intelligence, went so far as to say that this “threat is so intrusive, it’s so serious, it could literally suck the life’s blood out of this country.”
Such outrageous statements reflect the frequent use of “threat inflation” rhetoric in debates about online security. Threat inflation has been defined as “the attempt by elites to create concern for a threat that goes beyond the scope and urgency that a disinterested analysis would justify.” Unfortunately, such bombastic rhetoric often conflates minor cybersecurity risks with major ones. For example, dramatic doomsday stories about hackers pushing planes out of the sky misdirect policymakers’ attention from the more immediate, but less gripping, risks of data extraction and foreign surveillance. When those doomsday scenarios fail to materialize, well-meaning skeptics might conclude that our real cybersecurity risks are not a problem either. In the meantime, outdated legislation and inappropriate legal norms continue to impede beneficial defensive measures that could truly improve security. Continue reading →
The success of the Internet and the modern digital economy was due to its open, generative nature, driven by the ethos of “permissionless innovation.” A “light-touch” policy regime helped make this possible. Of particular legal importance was the immunization of online intermediaries from punishing forms of liability associated with the actions of third parties.
As “software eats the world” and the digital revolution extends its reach to the physical world, policymakers should extend similar legal protections to other “generative” tools and platforms, such as robotics, 3D printing, and virtual reality.
I was shocked and saddened to hear tonight that L.A. Superior Court Judge Dan Brenner was struck and killed in Los Angeles yesterday. I am just sick about it. He was a great man and good friend.
Dan was an outstanding legal mind who, before moving back out to California to become a judge in 2012, made a big impact here in DC while serving as a legal advisor to FCC chairman Mark Fowler in the 1980s. He went on to have a distinguished career as head of legal affairs at the National Cable & Telecommunications Association. He also served as an adjunct law professor in major law schools and wrote important essays and textbooks on media and broadband law.
More than all that, Dan Brenner was a dear friend to a great many people, and he was always the guy with the biggest smile on his face in any room he walked into. Dan had an absolutely infectious spirit; his amazing wit and wisdom inspired everyone around him. I never heard a single person say a bad word about Dan Brenner. Even people on the opposite side of any negotiating table from him respected and admired him. That’s pretty damn rare in a town like Washington, DC.
Throughout the year, I collect some of the more notable tech policy-related essays that I’ve read and then publish an end-of-year list here. (Here, for example, are my end-of-year lists from 2014 and 2013.) So, here are some of my favorite essays and editorials from 2015. (Note: They are just in chronological order. No ranking here.)
Larry Downes – “Take note Republicans and Democrats, this is what a pro-innovation platform looks like,” Washington Post, January 7. (Downes explains how governments need to adapt to accommodate and embrace new forms of technological innovation. He notes: “Here at home, the opportunity to wrap themselves in the flag of innovation is knocking for both parties, but so far there are few takers. Republicans and Democrats regularly invoke the rhetoric of innovation, entrepreneurship, and the transformative power of technology. But in reality neither party pursues policies that favor the disruptors. Instead, where lawmakers once took a largely hands-off approach to Silicon Valley, as the Internet revolution enters a new stage of industry transformation, the temptation to intervene, to usurp, to micromanage, to circumscribe the future — becomes irresistible.”) Equally excellent was Larry’s essay later in the year, “Fewer, Faster, Smarter.” (“As the technology revolution proceeds, the concept of government may return to its pre-industrial roots, setting the most basic rules of the economy and standing by as regulator of last resort when markets fail for some or all consumers over an extended period of time. Even then, the solution may simply be to tweak the incentives to encourage better behavior, rather than more full-fledged—and usually ill-fated—micromanagement of fast-changing industries.”)
Bryant Walker Smith – “Slow Down That Runaway Ethical Trolley,” CIS Blog, January 12. (Smith, a leading expert on autonomous vehicle systems, notes that, while serious ethical dilemmas will always be present with such technologies, we should not allow the perfect to be the enemy of the good. “The fundamental ethical question, in my opinion, is this: In the United States alone, tens of thousands of people die in motor vehicle crashes every year, and many more are injured. Automated vehicles have great potential to one day reduce this toll, but the path to this point will involve mistakes and crashes and fatalities. Given this stark choice, what is the proper balance between caution and urgency in bringing these systems to the market? How safe is safe enough?”)
Andrew McAfee – “Who are the humanists, and why do they dislike technology so much?” Financial Times, July 7, 2015. (A brief but brilliant exploration of the philosophical fight over differing conceptions of “humanism.” McAfee, appropriately in my opinion, calls into question technological critics who label themselves “humanists” and then suggest that those who believe in the benefits of technological innovation and progress are somehow opposed to humanity. In reality, of course, nothing could be further from the truth!)
Jocelyn Brewer – “Techno-Fear is Hurting Kids, Not Their Use of Digital Devices,” July 7, 2015. (A beautiful piece that makes it clear why “the Internet… is not addictive. Technology is not a drug.” Brewer continues on to make the case for avoiding fear-based messaging about Internet problems and instead adopting a more sensible approach: “Rather than trotting out interminable lists of the negative consequences of our adoption of technology lets raise awareness of how to avoid the pitfalls of not approaching this new era with solutions and proactive thinking.” Amen, sister!)
Evan Ackerman – “We Should Not Ban ‘Killer Robots,’ and Here’s Why,” IEEE Spectrum, July 29, 2015. (A thought-provoking piece about a controversial subject in which Ackerman argues that “banning the technology is not going to solve the problem if the problem is the willingness of humans to use technology for evil.”)
Tim O’Reilly – “Networks and the Nature of the Firm,” Medium, August 14, 2015. (Explores the economics of the sharing economy and “the huge economic shift led by software and connectedness.”)
Joe Queenan – “America’s Need for Pointless Updates and Cat Videos,” Wall Street Journal, December 3, 2015. (“The back-to-nature, turn-off-your-cellphone movement is based on a false assumption. . . . Time not spent doing dumb stuff would otherwise be wasted doing other dumb stuff. It’s called ‘play,’ without which Jack is a dull boy. It is a variation on the old saying that nature abhors a vacuum. So nature created the Internet.”)