Tech Policy Threat Matrix

September 24, 2015

On the whiteboard that hangs in my office, I have a giant matrix of technology policy issues and the various policy “threat vectors” that might end up driving regulation of particular technologies or sectors. My colleagues at the Mercatus Center’s Technology Policy Program and I constantly revise this list of policy priorities and simultaneously make an (obviously quite subjective) attempt to put some weights on the potential policy severity associated with each threat of intervention. The matrix looks like this: [Sorry about the small fonts. You can click on the image to make it easier to see.]


Tech Policy Issue Matrix 2015

I use five general policy concerns when considering the likelihood of regulatory intervention in any given area. Those policy concerns are:

  1. privacy (reputation issues, fear of “profiling” & “discrimination,” amorphous psychological / cognitive harms);
  2. safety (health & physical safety or, alternatively, child safety and speech / cultural concerns);
  3. security (hacking, cybersecurity, law enforcement issues);
  4. economic disruption (automation, job dislocation, sectoral disruptions); and,
  5. intellectual property (copyright and patent issues).

I realize that some of these five categories could be subdivided and refined. I also understand that these five groupings may not encapsulate the full range of potential policy issues out there, but I’ve tried to avoid having too many categories to keep this as conceptually tidy as possible. However, I might eventually need to add a separate category for civil rights and disabilities-related policy issues. Likewise, “psychological considerations” might deserve their own category because they do not fit perfectly into either the privacy or safety buckets, even though that’s where I have them currently. For example, some privacy activists call for regulation of “big data” and large databases based on fears about how all that data collection makes people feel about themselves. I consider that a privacy-related concern now, but you could imagine it being a separate category. Meanwhile, there have long been calls to regulate various types of media content (music, movies, video games, online porn, etc.) based on the psychological impact they have on children. Those “media effects” theories have always been considered a child safety issue, which is where I currently have them slotted, but they could probably form their own category that also includes concerns about distraction and addiction (which could come to haunt VR technologies in the future).

Anyway, my colleagues and I use this current matrix to help us determine what we should be paying more attention to and what sort of scholarly outputs are needed to address regulatory threats on each front. Generally speaking, this is the portfolio of issues I try to stay on top of full-time at Mercatus as part of our ongoing “Permissionless Innovation” project.

Several people who have seen that matrix in my office tell me I should do something more with it, but I’m not really sure what that something would be. In any event, I thought it might make sense to post it here to give others a feel for the current set of emerging tech policy issues that interest us at Mercatus. I will try to upload new versions of the matrix as that giant whiteboard in my office morphs over time and the list of technologies and regulatory threats changes or grows.

Incidentally, I am often asked to explain the relative weights I’ve assigned to each potential regulatory threat, so I will try to justify some of those rankings here briefly. (Again, it’s all quite subjective and I’m always open to hearing the case for tweaking the rankings.)

  • Big Data / Online Marketing / the Internet of Things (IoT): Privacy is the #1 policy threat for these sectors. From a public policy perspective, what unifies these technologies is a growing concern about how expanding private sector data collection efforts could affect our privacy or reputations. We’ve already seen a flurry of legislative and regulatory activity here in the U.S. aimed at placing restrictions on data collection or use. And it goes without saying that other countries, especially in Europe, already impose a wide variety of controls on data collection in the name of privacy protection. There also exists a variety of closely-related security concerns here. But the rise of IoT technologies has introduced safety concerns into the mix in a major way, too. That’s especially true because so many Big Data services and IoT devices are health- and medical-related. Taken together, this is the issue set I spend the majority of my time covering because the privacy and security implications of a data-driven economy already occupy the attention of countless regulatory activists and public policymakers across the globe. I think that will continue to be the case for many years to come.
  • Robotics: Safety concerns tend to be the biggest driver of calls for regulation of robotic and autonomous technology. For example, new laws and regulations are already being proposed for driverless cars based on fears about the hacking of connected vehicles. And commercial drones attract policy attention based on safety-related concerns such as whether a drone could strike an airplane, or even just fall on our heads. Proposals have been floated to mandate the equivalent of DRM for drones, which would force drone innovators to embed federally-approved technological controls into their systems designating where they are allowed to fly. Even if most of these concerns are overstated or are currently being dealt with, we can expect more safety-related policy proposals for robotic tech in coming years. Economic concerns would be a close second here due to the increasing worry that robots will eat all our jobs. At least so far, however, that concern has tended to be more academic in nature than a public policy consideration. And it remains unclear what the policy prescription would be in this regard without becoming a neo-Luddite, “smash-the-machines” sort of proposal. That could change in coming years, however. It all depends on the labor market situation over time. Meanwhile, academics are floating the idea of a Federal Robotics Commission to provide greater policy “expertise” in the form of yet another technocratic Beltway bureaucracy.
  • Additive manufacturing / 3D printing: Safety is probably the #1 concern here, although depending on what type of 3D-printed object we are talking about, it could be the case that intellectual property concerns will be a bigger driver of calls for regulatory intervention. A lot of the policy-related concerns around 3D printing today are being driven by worries over things like 3D-printed guns. That’s mostly a safety concern, of course. But if we are talking about the replication of branded commercial objects (3D-printed toys or other things, for example), then IP tends to be the bigger concern. The question of product liability also looms large here, and it remains unclear how claims might be sorted out when there are fewer large, deep-pocketed intermediaries to go after in a world of decentralized production. Hopefully, those liability norms will be left to the courts and common law to sort out over time, but I wouldn’t be surprised to see more calls for preemptive legislative interventions here in both directions: i.e., some will call on legislators to impose greater liability on certain parties while others will push to immunize intermediaries from punishing forms of liability for the downstream actions of others (like a Sec. 230 norm for 3D printing).
  • Medical tech innovation: It goes without saying that traditional safety concerns will drive policy for advanced medical technologies, just as they have for earlier drugs, devices, and treatments. As software continues to “eat the world” and invade the world of health and medicine, regulators are increasingly going to be trying to figure out how to pigeonhole new technologies into old regulatory constructs. That’s why I have been watching how the FDA continues to deal with 3D-printed prosthetics and mobile medical apps on our smartphones. Eventually, the continuing decentralized democratization of 3D printing (driven by rapidly falling costs) will collide with old medical device regulatory realities and a century’s worth of FDA command-and-control style regulation. Oh my, what a fight that will be! And then chemical printers will become more widespread and this issue will get even more intense. The policy fight here is even more interesting because of all the thorny ethical issues pertaining to the rise of embeddable technology, biohacking, and genome innovation. I have a feeling that my policy portfolio will shift rapidly in this direction in coming years as the modern info-tech revolution spreads to the world of medicine and health. I already have two new papers coming out on these issues in the next few weeks.
  • Sharing economy: Economic disruption is clearly the big policy issue here. Specifically, many policymakers and incumbent industries aren’t very happy about new entrants coming into their sectors and offering consumers services without strictly complying with traditional regulations. But safety issues often pop up in these debates when regulators or advocates claim we can’t trust sharing economy operators. What’s particularly interesting about this space is how these policy battles are playing out at almost every level of government: federal, state, local, and international. At least thus far, sharing economy innovators tend to be winning most of those battles. But the fight continues.
  • Crypto & Bitcoin: I think safety would probably be the biggest issue here, in the sense that policymakers fear that a world of unregulated crypto and decentralized blockchain applications is a world in which the “bad guys” will be able to use those technologies to harm the public in some fashion. We’ve heard this all before, of course, but (going all the way back to the Clipper Chip wars) you can always bank on law enforcement officials resorting to Chicken Little claims about terrorists and child predators thriving in a world of unregulated crypto. In many ways, this is the most important of all these policy fights because if the government can regulate crypto and blockchain technologies, it severely undermines the fabric of almost all the other technologies and platforms discussed herein. This is why the current debate over government-mandated “backdoors” is so important; it has profound ramifications for every other tech regulation debate that follows.
  • Immersive Tech (VR and augmented reality): This is an amorphous and evolving area that I am getting increasingly interested in, but the policy issues here have yet to come into clear focus. However, when Google Glass was launched, there was a brief technopanic of sorts over its privacy and security ramifications. Those concerns have subsided a bit as Google Glass has seemingly faded away (probably because of its high price point more than because of its privacy concerns), but I suspect that future iterations of augmented reality technologies will raise similar concerns. That will especially be true as more sophisticated biometric (and facial recognition) capabilities are integrated into them. Academics are already wondering how to enforce “notice and consent” privacy norms and rules in a world where everyone is wearing miniature body cams and heads-up displays in their sunglasses. I’m not sure it’s even possible, but that debate will continue and include all sorts of calls for technological controls. OK, that’s augmented reality, but what about virtual reality technologies? I think safety concerns could drive some policy proposals as critics grow concerned about the psychological implications of people (especially kids) spending more and more time in immersive virtual worlds. In that sense, we might see a replay of the earlier debate over violent video games and/or video game addiction. But it remains to be seen.

Incidentally, I use this matrix and provide more context for it in my big presentation on “Permissionless Innovation & the Clash of Visions over Emerging Technologies.” [It’s embedded below.] And I discuss most of these issues in more detail in my book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. I am in the process of finishing up the second edition of that book and will be expanding the case studies about the issues discussed above. Finally, I discussed many of these policy threats during my recent appearance on the Andreessen Horowitz podcast.

Update 10/2/15: For another take on various new technology trends and the potential policy issues they raise, check out this report from the World Economic Forum, Deep Shift: Technology Tipping Points and Societal Impact. The WEF report identifies 21 technology “shifts” and then groups them into six “mega-trend” categories. Almost all these issues are on my matrix above, but the WEF report provides some nice additional context on why each technology trend will be so disruptive.
