Trump’s AI Framework & the Future of Emerging Tech Governance

January 8, 2020

This week, the Trump Administration proposed a new policy framework for artificial intelligence (AI) technologies that attempts to balance the need for continued innovation with a set of principles to address concerns about new AI services and applications. This represents an important moment in the history of emerging technology governance as it creates a policy vision for AI that is generally consistent with earlier innovation governance frameworks established by previous administrations.

Generally speaking, the Trump governance vision for AI encourages regulatory humility and patience in the face of an uncertain technological future. However, the framework also endorses a combination of “hard” and “soft” law mechanisms to address policy concerns that have already been raised about developing or predicted AI innovations.

AI promises to revolutionize almost every sector of the economy and could benefit our lives in numerous ways. But AI applications also raise a number of policy concerns, particularly regarding safety and fairness. On the safety front, for example, some are concerned about the AI systems that control drones, driverless cars, robots, and other autonomous systems. On the fairness front, critics worry about “bias” in algorithmic systems that could deny people jobs, loans, or health care, among other things.

These concerns deserve serious consideration and some level of policy guidance, or else the public may never come to trust AI systems, especially if the worst of those fears materialize as AI technologies spread. But how policy is formulated and imposed matters profoundly. A heavy-handed, top-down regulatory regime could undermine AI’s potential to improve lives and strengthen the economy. Accordingly, a flexible governance framework is needed, and the administration’s new guidelines for AI regulation do a reasonably good job of striking that balance.

Background

Last February, the White House issued Executive Order 13859, on “Maintaining American Leadership in Artificial Intelligence.” The Order announced the creation of the “American AI Initiative,” an effort to “focus the resources of the Federal government to develop AI.” It prioritized investments in AI-focused research and development (R&D), building a workforce ready for the AI era, international engagement on AI priorities, and the establishment of governance standards for AI systems to “help Federal regulatory agencies develop and maintain approaches for the safe and trustworthy creation and adoption of new AI technologies.”

Regarding that last objective, Order 13859 required the Office of Management and Budget (OMB) and the Office of Science and Technology Policy (OSTP) to develop a framework and set of principles for federal agencies to follow when considering the development of regulatory and non-regulatory approaches for AI. Importantly, the Order also specified that the framework should seek to “advance American innovation” and “reduce barriers to the use of AI technologies in order to promote their innovative application while protecting civil liberties, privacy, American values, and United States economic and national security.”

That resulted in the memorandum sent to heads of federal departments and agencies this week entitled, “Guidance for Regulation of Artificial Intelligence Applications” (hereinafter AI Guidance). The draft version of the AI Guidance specifies that “federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth.” More specifically:

“Agencies must avoid a precautionary approach that holds AI systems to such an impossibly high standard that society cannot enjoy their benefits. Where AI entails risk, agencies should consider the potential benefits and costs of employing AI, when compared to the systems AI has been designed to complement or replace.”

But the AI Guidance is certainly not a call for comprehensive deregulation or the abandonment of all federal oversight of AI. The memorandum’s very title reflects an understanding that existing laws and agency rules will continue to play a role in guiding the development of AI, machine-learning, and autonomous systems.

Accordingly, and consistent with past executive orders and OMB regulatory guidance documents for federal agencies, the AI Guidance establishes a set of ten principles that agencies must weigh when considering AI policy:

  1. Public trust in AI: Requiring that “the government’s regulatory and non-regulatory approaches to AI promote reliable, robust, and trustworthy AI applications, which will contribute to public trust in AI.”
  2. Public participation: Agencies must provide “ample opportunities for the public to provide information and participate in all stages of the rulemaking process.”
  3. Scientific integrity and information quality: Agencies should “leverage scientific and technical information and processes” to build trust and ensure data quality and transparency.
  4. Risk assessment and management: Acknowledging that “all activities involve tradeoffs,” the AI Guidance requires that “a risk-based approach should be used to determine which risks are acceptable and which risks present the possibility of unacceptable harm, or harm that has expected costs greater than expected benefits.”
  5. Benefits and costs: As part of those risk assessments, agencies must “carefully consider the full societal costs, benefits, and distributional effects before considering regulations related to the development and deployment of AI applications. Such consideration will include the potential benefits and costs of employing AI, when compared to the systems AI has been designed to complement or replace, whether implementing AI will change the type of errors created by the system, as well as comparison to the degree of risk tolerated in other existing ones.”
  6. Flexibility: OMB encourages agencies to “pursue performance-based and flexible approaches that can adapt to rapid changes and updates to AI applications.”
  7. Fairness and non-discrimination: Acknowledging that AI applications can, “in some instances, introduce real-world bias that produces discriminatory outcomes or decisions that undermine public trust and confidence in AI,” the AI Guidance requires agencies to consider “issues of fairness and non-discrimination with respect to outcomes and decisions produced by the AI application at issue.”
  8. Disclosure and transparency: Agencies are encouraged to consider how greater “transparency and disclosure can increase public trust and confidence in AI applications.”
  9. Safety and security: Agencies are required to “promote the development of AI systems that are safe, secure, and operate as intended, and encourage the consideration of safety and security issues throughout the AI design, development, deployment, and operation process.”
  10. Interagency coordination: The guidance makes it clear that a “coherent and whole-of-government approach to AI oversight requires interagency coordination.”

Soft Law Ascends

Importantly, the AI Guidance also encourages agencies to be open to “non-regulatory approaches to AI” governance and specifies three particular models:

  • Sector-specific policy guidance or frameworks: OSTP writes that “agencies should consider using any existing statutory authority to issue non-regulatory policy statements, guidance, or testing and deployment frameworks, as a means of encouraging AI innovation in that sector.” The memorandum also notes that this can include “work done in collaboration with industry, such as development of playbooks and voluntary incentive frameworks.”
  • Pilot programs and experiments: The document encourages the use of “pilot programs that provide safe harbors for specific AI applications” which “could produce useful data to inform future rulemaking and non-regulatory approaches.”
  • Voluntary consensus standards: Before regulating, the AI Guidance encourages agencies to consider how voluntary consensus standards, assessment programs, and compliance programs might be used to address policy concerns.

These represent “soft law” approaches to technological governance, and they are becoming all the rage in technology policy discussions today. Soft law mechanisms are informal, collaborative, and constantly evolving governance efforts. While not formally binding like “hard law” rules and regulations, soft law efforts nonetheless create a set of expectations about the sensible development and use of technologies. Soft law can include multistakeholder initiatives, best practices and standards, agency workshops and guidance documents, educational efforts, and much more.

Soft law has become the dominant governance approach for emerging technologies because it is often better able to address the “pacing problem,” which refers to the growing gap between the rate of technological innovation and policymakers’ ability to keep up with it. As I have previously noted, the pacing problem is “becoming the great equalizer in debates over technological governance because it forces governments to rethink their approach to the regulation of many sectors and technologies.”

Not only do traditional legislative and regulatory hard law systems struggle to keep up with fast-paced technological changes, but oftentimes those older mechanisms are just too rigid and unsuited for new sectors and developments. That is definitely the case for AI, which is multi-dimensional in nature and even defies easy definition. Soft law offers a more flexible, adaptive approach to learning on the fly and cobbling together principles and policies that can address new policy concerns as they develop in specific contexts, without derailing potentially important innovations.

Building on Past Governance Frameworks

In this sense, the Trump administration’s AI Guidance borrows from past policy frameworks by pairing a desire to promote an exciting new set of emerging technologies with the need for reasonable but flexible oversight and governance mechanisms. At a high level, the AI Guidance builds on many of the same principles that motivated the Clinton administration’s Framework for Global Electronic Commerce, a statement of principles and policy objectives for the then-emerging Internet. The document, which was issued in July 1997, said that “governments should encourage industry self-regulation and private sector leadership where possible” and “avoid undue restrictions on electronic commerce.”

The Framework was a clean break from the top-down regulatory paradigm that had previously governed traditional communications and media technologies. Clinton’s Framework insisted that, to the extent government intervention was needed at all, “its aim should be to support and enforce a predictable, minimalist, consistent and simple legal environment for commerce.” The use of soft law and multistakeholder models was a key component of this vision, and those more flexible governance approaches were tapped by subsequent administrations to address emerging tech policy concerns.

For example, the Obama administration considerably expanded the use of multistakeholder mechanisms and other soft law tools in response to the need for oversight of fast-moving technologies. It had many governance efforts underway for specific AI technologies and concerns, including workshops and multistakeholder efforts focused on the safety, security, and privacy-related issues surrounding “big data” systems, online advertising, connected cars, drones, and more.

Whereas the Obama administration was deeper in the weeds of the policy issues associated with specific AI and machine-learning applications, the Trump administration has sought both to build on those focused efforts and to step back to consider AI governance at the 30,000-foot level. In essence, the AI Guidance combines some of the aspirational elements found in the Clinton Framework with the Obama administration’s more targeted approach of addressing specific policy concerns across many different sectors and technologies.

Trump’s AI Guidance adds an element of formality to this process regarding how federal agencies should address AI developments and formulate potential policy responses. It does so by counseling humility and even potential forbearance until all the facts are in. “Fostering innovation and growth through forbearing from new regulations may be appropriate,” the memorandum says. “Agencies should consider new regulation only after they have reached the decision, in light of the foregoing section and other considerations, that Federal regulation is necessary.” Again, this is very much consistent with more general regulatory guidance issued by every administration since President Reagan was in office.

Flexible, Adaptive Governance is Key

The AI Guidance foreshadows the future of not only AI governance but the governance of many other emerging technologies. Hard law will continue to provide a backstop and have a role in guiding technological developments. Toward that end, the new AI Guidance is important because it represents an effort to “regulate the regulators” by placing some ground rules on how agencies go about applying old law to new developments.

But soft law governance is where the real action is, both for AI and almost all emerging technologies today. The Trump AI Guidance reflects the extent to which soft law has become the dominant governance paradigm for modern tech sectors. As my colleagues Jennifer Huddleston and Trace Mitchell have noted, soft law is already effectively the law of the land for driverless cars, for example. After years of congressional wrangling over a federal autonomous vehicle regulatory framework—one that has widespread bipartisan support, no less—we still do not have a law on the books. Instead, the Department of Transportation has been cobbling together informal “rules of the road” through guidance documents that have been “versioned” as if they were computer software (i.e., Version 1.0, 2.0, 3.0). Version 4.0 of the DOT guidance for automated vehicles was just released this week.

That is the same approach that the National Institute of Standards and Technology (NIST) has taken with the privacy guidelines it developed. NIST’s Privacy Framework: A Tool for Improving Privacy through Enterprise Risk Management is also versioned like software. And many other federal agencies, especially the Federal Trade Commission, have tapped a wide variety of soft law tools—such as agency workshops and workshop reports that recommended privacy best practices for various technologies. Meanwhile, the National Telecommunications and Information Administration (NTIA) has used multistakeholder processes to address privacy concerns surrounding a wide range of technologies, including drones and facial recognition. NIST, FTC, and NTIA have undertaken these informal governance efforts because, despite over a decade of debate, Congress still has not advanced comprehensive federal privacy legislation. For better or worse, soft law has filled that governance gap.

Addressing Likely Objections from Left & Right

Many people of varying ideological dispositions will object to the growing role of soft law as the primary governance tool for emerging technology policy. Some conservatives will cringe at the thought of giving regulators greater leeway to address amorphous policy concerns, fearing that it will result in unconstrained exercises of unaccountable, extra-constitutional power.

Some of those concerns are valid, but they fail to account for the fact that the prospects for the agency downsizing or deregulation they prefer are extremely limited. Practically speaking, the administrative state isn’t going anywhere. In some cases, agencies can actually do some real good by encouraging innovators to think about how to “bake in” sensible best practices to preemptively address concerns about the privacy, safety, security, and fairness of various AI systems. Better those concerns be addressed in a more flexible, adaptive fashion than through a heavy-handed, overly rigid regulatory approach. Soft law offers that possibility, even if legitimate concerns remain about agency accountability and transparency.

Many to the left of center will be critical of this governance approach as well, but on very different grounds. As Associated Press reporter Matt O’Brien notes, “the vagueness of the principles announced by the White House is unlikely to satisfy AI watchdogs who have warned of a lack of accountability as computer systems are deployed to take on human roles in high-risk social settings, such as mortgage lending or job recruitment.”

These concerns actually are addressed in several of the OSTP’s ten principles, including those which stress the need for fairness and non-discrimination, information quality, public participation, disclosure and transparency, and safety and security. Yet many on the left will claim these principles merely pay lip service to these values and that what is really needed is a full-blown regulatory regime and some sort of corresponding new federal AI agency, which would preemptively determine which AI technologies would be allowed into the wild.

Already, an Algorithmic Accountability Act was introduced in Congress last year that would ask the FTC to take a more active role in policing “inaccurate, unfair, biased, or discriminatory decisions impacting consumers” that may have resulted from “automated decision systems.” Meanwhile, some academics have called for the creation of a Federal Robotics Commission or a National Algorithmic Technology Safety Administration to preemptively oversee new AI developments.

The problem with overly precautionary regulation of that sort is that it could unduly limit AI innovation and the many benefits it entails. There may be some AI applications that pose serious and immediate risks to humanity and which require preemptive restraints on their development and use. Autonomous military and law enforcement applications are the most obvious examples. But most AI applications do not rise to that level of regulatory concern, and they require governance approaches that can balance their beneficial uses against potential misuse. This is why a more open and flexible governance approach is needed. Moreover, the old regulatory system simply cannot keep up anymore, and it is ill-suited to address most policy concerns in a timely or efficient fashion.

Cristie Ford, an advocate of greater regulatory oversight for fintech, notes in her latest book that the problem with “old-style Welfare State regulation” is that it is “a clumsy, blunt instrument for achieving regulatory objectives” due to its reliance upon “one-size-fits-all mandates, prohibitions, and penalties.” Ford acknowledges what many other regulatory advocates are reluctant to admit: fast-paced technology sectors can no longer be governed effectively using the Analog Era’s top-down, command-and-control regulatory processes. Far too many federal agencies rely on a “build-and-freeze” model of regulation that sets rules in stone to deal with one set of issues, but then fails either to eliminate those rules when they become obsolete or to reform them to bring them in line with new social, economic, and technical realities.

If we hope to encourage continued innovation in sectors that could produce profoundly important, life-enriching technologies, America’s regulatory approach for AI and emerging technology needs to move away from “build-and-freeze” and toward “build-and-adapt.” Regulation is still needed, but the old regulatory toolkit is badly broken. For better or worse, soft law is going to fill the resulting governance gap, regardless of objections from some on the left or the right. Pragmatic policymaking is going to carry the day for emerging technology governance.

Conclusion

The Trump Administration AI Guidance represents a continuation and extension of this trend toward more flexible, adaptive governance approaches for emerging technologies. It offers a pragmatic vision that builds on the policies and paradigms of the past, while also encouraging fresh thinking about how best to balance the need for continued innovation alongside the various concerns about disruptive technological change.

Many challenging issues lie ahead, and the new AI Guidance cannot provide bright-line answers to all the hypothetical questions people want answered today. No one possesses a crystal ball that will allow them to forecast the technological future. Only ongoing trial-and-error experimentation and policy improvisation will allow us to find sensible solutions. A policy approach rooted in humility, flexibility, and forbearance will help ensure that America’s regulatory policies continue to promote both innovation and the public good.
