I’ve been thinking about the “right to try” movement a lot lately. It refers to the growing effort (especially at the state level here in the U.S.) to allow individuals to experiment with alternative medical treatments, therapies, and devices that are restricted or prohibited in some fashion (typically by the Food and Drug Administration). I think there are compelling ethical reasons for allowing citizens to determine their own course of treatment in terms of what they put into their bodies or what medical devices they use, especially when they are facing the possibility of death and have exhausted all other options.
But I also favor a more general “right to try” that allows citizens to make their own health decisions in other circumstances. Such a general freedom entails some risks, of course, but the better way to deal with those potential downsides is to educate citizens about the trade-offs associated with various treatments and devices, not to forbid them from seeking them out at all.
The Costs of Control
But this debate isn’t just about ethics. There’s also the question of the costs associated with regulatory control. Practically speaking, with each passing day it becomes harder and harder for governments to control unapproved medical devices, drugs, therapies, etc. Correspondingly, that significantly raises the costs of enforcement and makes one wonder exactly how far the FDA or other regulators will go to stop or slow the advent of new technologies.
I have written about this “cost of control” problem in various law review articles as well as my little Permissionless Innovation book and pointed out that, when enforcement challenges and costs reach a certain threshold, the case for preemptive control grows far weaker simply because of (1) the massive resources that regulators would have to pour into the task of crafting a workable enforcement regime; and/or (2) the massive loss of liberty that such solutions would entail for society more generally. With the rise of the Internet of Things, wearable devices, mobile medical apps, and other networked health and fitness technologies, these issues are going to become increasingly ripe for academic and policy consideration.
A Hypothetical Regulatory Scenario
Here’s an interesting case study to consider in this regard: Can 3D printing of prosthetics be controlled? Clearly prosthetics are medical devices in the traditional regulatory sense, but few people are going to the FDA and asking for permission or a “right to try” new 3D-printed limbs. They’re just doing it. And the results have been incredibly exciting, as my Mercatus Center colleague Robert Graboyes has noted.
But let’s imagine what the regulators might do if they really wanted to impose their will and limit the right to try in this context:
- Could government officials ban 3D printers outright? I don’t see how. The technology is already too diffuse and is put to so many alternative (and uncontroversial) uses that such a control regime seems unlikely to work or to be accepted. And if any government did take this extreme step, “global innovation arbitrage” would kick in. That is, innovators would just move offshore.
- Could government officials ban the inputs used by 3D printers? Again, I don’t see how. After all, we are primarily talking about plastics and glue here!
- Could government officials ban 3D printer blueprints? There are two problems with that. First, such blueprints are a form of free speech, and government efforts to censor them would represent a form of prior restraint that would violate the First Amendment of the U.S. Constitution. Second, even ignoring the First Amendment issues, information control is just damned hard, and I don’t see how you could suppress such blueprints effectively when they are freely available across the Internet. Failing that, people would just “torrent” them, as they do (illegally) with copyrighted files today.
- Could government officials ban and/or fine specific companies (especially those with deep pockets)? Perhaps, but that is likely a losing strategy since 3D printing is already so highly decentralized and is done by average citizens in the comfort of their own homes (and often for no monetary gain). So, attempting to go after a handful of corporate players and “make an example out of them” to deter others from experimenting isn’t likely to work. And, again, it’ll just lead to more offshoring and undergrounding of these devices and innovative activities.
- Could government officials ban the sale of certain 3D printing applications? They could try, but enterprising minds would likely start using alternative payment methods (like Bitcoin) to conduct their deals. In any event, the question of payments is largely irrelevant in many fields because much of this activity is non-commercial and open-source in character. People are freely distributing blueprints for 3D-printed prosthetics, for example, and they are even giving away the actual 3D-printed prosthetic devices to those who need them.
- Could government officials just create a licensing / approval regime for narrowly targeted 3D-printed medical devices? Of course, but for all the reasons outlined above, it would likely be pretty easy to evade such a regime. Moreover, the very effort to enforce such a licensing regime would likely deter many beneficial innovations in the process, while also leading to the old cronyist problems of firms engaging in rent-seeking and courting favor with regulators in order to survive or prosper.
Anyway, you get the point: The practicality of control makes a difference, and at some point the enormous costs associated with enforcement become an ethical matter in their own right. Stated differently, it’s not just that citizens should generally be at liberty to determine their own treatments and decide what drugs they ingest and what medical devices they use; it’s also the case that regulatory efforts aimed at limiting that right carry so many corresponding enforcement costs that can spill over onto society more generally. And that’s an ethical matter of a different sort when you get right down to it. But, at a minimum, it’s an increasingly costly strategy, and the costs associated with such technological control regimes should be considered closely and quantified where possible.
The Need for a Shift toward Risk Education
Let’s return to the question I raised above regarding the educational role that the FDA, or governments more generally, could play in the future. As I noted, a world in which citizens are granted the liberty to make more of their own health decisions is a world in which they could, at times, be rolling the dice with their health and lives. The highly paternalistic approach of modern food and drug regulation is rooted in the belief that citizens simply cannot be trusted to make such decisions on their own because they will never be able to appreciate the relative risks. You might be surprised to hear that I am somewhat sympathetic to that argument. People can and do make rash and unwise decisions about their health based on misinformation or a general lack of quality information presented in an easy-to-understand fashion. As a result, policymakers have taken the right to make these decisions away from us in many circumstances.
Although motivated by the best of intentions, paternalistic controls are not the optimal way to address these concerns. The better approach is rooted in risk education. To reiterate, the wise way to deal with the potential downsides associated with freedom of choice is to educate citizens about the relative risks associated with various medical treatments and devices, not to forbid them from seeking them out at all.
What does that mean for the future of the FDA? If the agency were smart, it would recognize that traditional command-and-control regulation is no longer a sensible strategy; it’s increasingly unworkable and imposes too many other costs on innovators and personal liberty. Thus, the agency needs to reorient its focus toward becoming a risk educator. Its goal should be to help create a more fully informed citizenry that is empowered with more and better information about relative risk trade-offs.
Overcoming the Opposition & Getting Consent Mechanisms Right
Such an approach (i.e., shifting the FDA’s mission from being primarily a risk regulator to becoming a risk educator) will encounter opposition from strident defenders and opponents of the FDA alike.
The defenders of the FDA and its traditional approach will continue to insist that people can never be trusted to make such decisions on their own, regardless of how much information they have at their disposal or how many warnings we might give them. The problem with that position is that it treats citizens like ignorant sheep and denies them the most basic of all human rights: the right to live a life of one’s own choosing and to make the ultimate determinations about one’s own health and welfare. And, again, blindly defending the old system isn’t wise because traditional command-and-control regulatory methods are increasingly impractical and incredibly costly to enforce.
Opponents of the FDA, by contrast, will insist that the agency can’t even be trusted to provide us with good information to make these decisions on our own. Additionally, critics will likely argue that the agency might give us the wrong information or try to “nudge” us in certain directions. I share some of those concerns, but I am willing to live with that possibility so long as we are moving toward a world in which that is the only real power the FDA possesses over me and my fellow citizens. Because if all the agency is doing is providing us with information about risk trade-offs, then at least we remain free to seek out alternative information from other experts and then choose our own courses of action.
The tricky issue here is getting consent mechanisms right. In fact, it’s the linchpin of the new regime I am suggesting. In other words, even if we could agree that a more fully informed citizenry should be left free to make these decisions on their own, we need to make sure that those individuals have provided clear and informed consent to the parties they might need to contract with when seeking alternative treatments. That’s particularly essential in a litigious society like America, where the threat of liability always looms large over doctors, nurses, hospitals, insurers, and medical innovators. Those parties will only go along with an expanded “right to try” regime if they can be assured they won’t be held to blame when citizens make controversial choices that those parties advised against, or for which they at least clearly laid out all the potential risks and alternatives. This will require an evolution not only of statutory law and regulatory standards, but also of the common law and insurance norms.
Once we get all that figured out—and it will, no doubt, take some time—we’ll be on our way to a better world where the idea of having a “right to try” is the norm instead of the exception.
(My thanks to Adam Marcus for commenting on a draft of this essay. For more general background on 3D printing, see his excellent 2011 primer here, “3D Printing: The Future is Here.”)