The race for artificial intelligence (AI) supremacy is on, with governments across the globe looking to take the lead in the next great technological revolution. As they did during the internet era, the US and Europe are once again squaring off with competing policy frameworks.
In early January, the Trump Administration announced a new light-touch regulatory framework and then followed up with a proposed doubling of federal R&D spending on AI and quantum computing. This week, the European Commission issued a major policy framework for AI technologies and billed it as “a European approach to excellence and trust.”
It seems the EU basically wants to have its cake and eat it too by marrying an ambitious industrial policy with a precautionary regulatory regime. We’ve seen this show before. Europe is doubling down on the same policy regime it used for the internet and digital commerce. It did not work out well for the continent then, and there are reasons to think it will backfire again when applied to AI technologies.
An Ambitious Industrial Policy Vision
The new EU framework includes a lot of catchphrases and proposals that are an industrial policy lover’s dream. In an attempt to create “an ecosystem of excellence” and ensure the “human-centric development of AI,” it identifies a variety of existing or new industrial planning efforts, including: Digital Innovation Hubs, Enterprise Resource Planning, the Digital Europe Programme, the Key Digital Technology Joint Undertaking, and broad-based public-private partnerships. This is all part of an official “Coordinated Plan” prepared together with the Member States “to foster the development and use of AI in Europe.”
To accomplish that, the Commission says it will “facilitate the creation of excellence and testing centres” that will “concentrate in sectors where Europe has the potential to become a global champion.” The Commission also wants to give special consideration to growing small and mid-size enterprises (SMEs) in establishing these plans.
Again, it’s an ambitious industrial policy vision, and one that will be accompanied by a wide variety of (yet-to-be-determined) regulatory enactments to shape the development and use of AI. But if that approach really works, why aren’t European digital companies global leaders today? Instead, firms based mostly in the US have risen to become household names across the globe. Regulation had an influence on that result because American firms enjoyed a policy regime that was rooted in “permissionless innovation,” which generally allows experimentation by default and addresses concerns by using more flexible, ex post remedies. By contrast, Europe’s internet policy approach was rooted in the precautionary principle, or the notion that innovation is essentially guilty until proven innocent. New technologies are to be subjected to prior constraints—or what the new European Commission white paper calls “prior conformity assessments”—before being allowed into the wild.
Precautionary Regulation Dominates
Despite losing that last round of the innovation wars, the new EU white paper makes it clear that Europe will keep using a precautionary approach. What does that mean for AI regulation? The problem here begins with defining what is a “high-risk” AI application requiring prior restraints. The white paper defines it in a somewhat circular fashion, saying that, “an AI application should be considered high-risk where…(it) is employed in a sector where, given the characteristics of the activities typically undertaken, significant risks can be expected to occur” and is “used in such a manner that significant risks are likely to arise.” Instead of providing legal certainty, this definition clarifies almost nothing and will require future regulatory inquiries to determine the full scope and nature of AI controls.
There’s also a lot of talk in the proposal about preemptively addressing “risks for fundamental rights,” which is understandable. AI innovations can raise various safety, security, and privacy concerns that deserve to be taken seriously. But what about the risk of not having access to important AI innovations at all? What about the risk of losing out on life-enriching—and in many cases life-saving—innovations because, instead of “building trust,” the regulatory regime builds the exact opposite: fear of innovating?
Entrepreneurs and investors respond to incentives. Before building or investing in a new technology, they want to know how long it will take to get that good or service launched—assuming they can get approval at all. Every innovator and investor factors such political risk into their business plans. When the potential costs of product launch overwhelm the likely benefits, they will abandon innovative efforts or look to engage in them elsewhere.
The EU says “the race for global leadership is ongoing,” and claims that, “Europe offers significant potential, knowledge and expertise” through its efforts to make the continent an AI innovation hub. Indeed, some of the best AI researchers are in Europe, and there are plenty of brilliant people brimming with entrepreneurial enthusiasm about creating world-class AI applications. But all that knowledge and enthusiasm do not matter much if the regulatory deck is stacked against innovation from the start.
And Even More Expansive Regulation Down the Road
Beyond the precautionary approach in that document, the EU’s accompanying white paper on the safety and liability implications of AI leaves open the possibility of an expansion in preemptive regulatory requirements. “Additional obligations may be needed for manufacturers to ensure that they provide features to prevent the upload of software having an impact on safety during the lifetime of the AI products,” the document notes. Moreover, if an ongoing AI software update “modifies substantially the product in which it is downloaded, the entire product might be considered as a new product and compliance with the relevant safety product legislation must be reassessed at the time the modification is performed.”
That sort of regulatory regime may sound quite sensible at first blush. In practice, however, it means that every conceivable tweak to an algorithm requires costly and complex regulatory approval. If traditional computer software had required regulatory approval before any new modifications could be made, most consumers would still be stuck with an aol.com email address and Windows 95 as an operating system.
What the European Commission proves with its new AI policy framework is that it is easy to talk a big game about planning for an innovative future, but it is an entirely different thing to actually bring one about. The European approach will have clear competitive effects, or more specifically, anti-competitive effects. As is already the case with the EU’s regulatory approach to the data economy, and GDPR in particular, regulatory compliance costs continue to skyrocket, and small and mid-size enterprises struggle to cope. This means that only firms operating the largest digital platforms are able to shoulder these burdens, leaving consumers without as many competitive, low-cost choices as they might otherwise enjoy. Not even generous government support for SMEs will be able to counterbalance the costly entry barriers associated with over-regulation.
Solidifying Market Power of Existing Giants?
This is why the EU’s worries about the market power of Google, Facebook, and other US-based tech giants are so ironic: the regulatory burden now helps those firms maintain their market dominance. Over-regulation by the EU undermined both home-grown and international investment and competition that might have challenged those existing players. With each additional layer of AI regulation that now gets piled on top of Europe’s existing regulatory burden, the prospects for creative destruction decrease, as do the chances for life-enriching innovations to ever make it to consumers.
While the European Commission will, no doubt, insist that they are implementing this new AI regime with the very best of intentions in mind, there is no escaping the fact that regulation involves complex trade-offs and unforeseeable consequences. The consequences in this case are likely a bit easier to predict, however: By smothering new AI applications in layers of red tape, we can expect fewer innovations and less competition.
Despite all the talk of boosting SMEs, perhaps the EU will eventually become more like China and unabashedly support larger home-grown firms to make sure they are part of the global AI race. China has already made waves on this front with its 2017 “New Generation Artificial Intelligence Development Plan,” an audacious industrial policy plan which seeks “to build China’s first-mover advantage in the development of AI [and] to accelerate the construction of an innovative nation and global power in science and technology.” The document is as much a manifesto about geopolitical power as it is about technological governance. And it does not try to hide China’s authoritarian impulse to meticulously plan every facet of daily life under the auspices of promoting global technological leadership. China’s AI manifesto even concludes with a section on “public opinion guidance” that creepily insists the country will, “Fully use all kinds of traditional media and new media to quickly propagate new progress and new achievements in AI, to let the healthy development of AI become a consensus in all of society, and muster the vigor of all of society to participate in and support the development of AI.”
The new European AI industrial policy framework does not go as far as China’s, not only because the continent is obviously more open and democratic by nature, but also because the EU is a collection of many countries and cultures that will never be able to speak as coherently and forcefully with one voice on all technological governance matters. In fact, the EU’s new governance framework explicitly leaves room for more tailored AI regulation by individual member states.
Conclusion
This leaves Europe stuck between the polar opposites of China and the US when it comes to AI governance. China’s meticulously detailed, highly centralized, state-driven approach stands in stark contrast to the more bottom-up, adaptive American approach which insists that regulators, “must avoid a precautionary approach that holds AI systems to such an impossibly high standard that society cannot enjoy their benefits.”
The US approach also leans heavily on “soft law,” or informal governance mechanisms that are not as burdensome as precautionary regulatory controls. Soft law can include a wide variety of tools and methods for addressing policy concerns, including multistakeholder initiatives, best practices and standards, agency workshops and guidance documents, educational efforts, and much more. These are the governance tools that dominated for the internet and digital platforms over the past twenty years in the US, and they will likely continue to be the primary governance mechanisms for artificial intelligence, robotics, the internet of things, and other emerging tech sectors.
The EU probably thinks it has found the Goldilocks formula and gotten AI policy just right by falling between China and the US on the governance spectrum. It is more likely, however, that European policymakers will be unable to resist the urge to over-plan and micro-manage AI markets until they are once again left wondering how they got stuck trying to regulate market leaders that are headquartered oceans away from them. With the US once again adopting a more flexible approach, we could see a replay of the Web Wars, with innovators and investors putting their efforts behind AI launches in the US instead of Europe. Meanwhile, China will likely attract far more global venture capital for AI and robotics launches than it did for digital platforms. This could really put the squeeze on Europe.
Only time will tell. But, to paraphrase Yoda, when it comes to global artificial intelligence governance, one thing is clear: Begun the AI war has.