Technology Liberation Front – https://techliberation.com
Keeping politicians’ hands off the Net & everything else related to technology

6 Ways Conservatives Betray Their First Principles with Online Child Safety Regulations
Tue, 20 Sep 2022

I’ve been floating around in conservative policy circles for 30 years and I have spent much of that time covering media policy and child safety issues. My time in conservative circles began in 1992 with a 9-year stint at the Heritage Foundation, where I launched the organization’s policy efforts on media regulation, the Internet, and digital technology. Meanwhile, my work on child safety has spanned 4 think tanks, multiple blue ribbon child safety commissions, countless essays, dozens of filings and testimonies, and even a multi-edition book.

During this three-decade run, I’ve tried my hardest to find balanced ways of addressing some of the legitimate concerns that many conservatives have about kids, media content, and online safety issues. Raising kids is the hardest job in the world. My daughter and son are now off at college, but the last twenty years of helping them figure out how to navigate the world and all the challenges it poses were filled with difficulties. This was especially true because the two of them faced completely different challenges when it came to media content and online interactions. Simply put, there is no one-size-fits-all playbook when it comes to raising kids or addressing concerns about healthy media interactions.

Something Must Be Done!

My personal approach, as I summarized in my book on these issues, was first and foremost to do everything in my power to (a) keep an open mind about new media content and platforms, and (b) ensure an open line of ongoing communication with my kids about the issues they might be facing. Shutting down conversation or calling for others to come in and save the day were the two worst options, in my opinion. As I wrote in the book, “At the end of the day, there is simply no substitute for talking to our children in an open, loving, and understanding fashion about the realities of this world, including the more distasteful bits.” This was my Parental Prime Directive, if you will. I just always wanted to make sure that my kids felt like they could talk to me about their issues, no matter how varied, horrible, or heart-breaking those problems might be.

When talking with other parents through the years, I’ve heard about their own unique concerns and struggles. Every family faces different challenges because no two kids or situations are alike. Moreover, the challenges can feel overwhelming in our modern world of information abundance, which is flush with ubiquitous communications and media options. Sometimes these parental frustrations can fester and grow into a sort of rage until you finally hear folks utter that famous phrase: Something must be done! And that “something” is often some sort of government regulation “for the children.”

Again, I get it. When all your best efforts to help or protect your kids don’t seem to work according to plan, it’s only natural to call for help. But there are very serious problems associated with calling on government for that help. When legislators and regulators are asked to play the role of National Nanny, it comes with all the same baggage that accompanies many other efforts by the government to intervene in our lives or control what people or organizations can say or do.

Conservative Contradictions

These are particularly sensitive issues for many conservatives, both because conservatives tend to have more heightened concerns about media content and online safety issues, and also because the steps they often recommend to address these issues can quickly come into conflict with their own first principles.

Let me run through six ways that support for media content controls and child safety regulations can sometimes run afoul of conservative principles.

1) It’s a rejection of personal responsibility

Again, I understand all too well how hard parenting can be. But that does not mean we should abdicate our parental responsibilities to the State. Conservatives have spent decades fighting government when it comes to broken schools and the supposed brainwashing many kids get in them. The rallying cry of conservatives has long been: Let us have a greater say in how we raise and educate our children because the State is failing us or betraying our values.

Thus, when conservatives suggest that the State should be making decisions for us as it pertains to anything the government says is a “child safety” issue, there is some serious cognitive dissonance going on there. In his humorous Devil’s Dictionary, Ambrose Bierce jokingly defined responsibility as, “A detachable burden easily shifted to the shoulders of God, Fate, Fortune, Luck or one’s neighbor. In the days of astrology it was customary to unload it upon a star.” For parental responsibility to actually mean something, it has to be more than a “detachable burden” that we unload upon government.

2) It’s an embrace of the administrative state & arbitrary rule by unelected bureaucrats

Beyond the classroom, conservatives have long been concerned about the specter of massive administrative agencies and armies of unelected bureaucrats controlling our lives from the shadows. I’ve spent decades working with conservative organizations and scholars trying to get the administrative state under some control and scale back its enormous power, arbitrary edicts, and costly burdens. Over-criminalization has become such a problem that, according to the Heritage Foundation, “regulatory offenses… have proliferated to the point that, literally, nobody knows how many federal criminal regulations exist today.” We’re all criminals of some sort in the eyes of the modern regulatory state.

Yet, when conservatives advocate expanding the administrative state through new “online safety” regulations, they make the over-criminalization problem worse, including by treating our own children as guilty parties for simply trying to access the primary media platforms of their generation and interact with their friends there. For example, calls to ban all teens from social media until they’re 18 would create the most massive “forbidden fruit” nightmare in American history. Every teen would suddenly become a criminal actor, working with others to tunnel around bans using the same sort of VPNs and evasion technologies that people in China and other repressive nations use to get around overbearing speech policies. [See: “Again, We Should Not Ban All Teens from Social Media”]

Needless to say, all this regulation and bureaucratic empowerment would have massive negative externalities for online freedom more generally as the era of “permissionless innovation” is replaced by a new age of permission-slip regulation.

3) It’s a rejection of the First Amendment & free speech rights

Conservatives have spent many decades pushing for greater First Amendment-based freedoms as they pertain to religious liberty and organizational/corporate speech issues. Thus, when conservatives seek to undermine free speech principles and jurisprudence in the name of child safety, they risk undoing everything conservatives have been fighting to accomplish in those other contexts.

Conservatives are understandably upset with some social media platforms for being overzealous with certain types of speech takedowns or de-platformings. But two wrongs don’t make a right, and they should not be calling on Big Government to impose its own editorial judgments in place of those of private actors. [See: “The Great Deplatforming of 2021” and “When It Comes to Fighting Social Media Bias, More Regulation Is Not the Answer.”]

4) It’s a rejection of property rights and freedom more generally

Related to the previous two points, conservatives have long upheld the sanctity of property rights in many different contexts. This includes the property rights that private establishments enjoy under the Constitution to generally decide how to structure their operations, who they will do business with, and how they will do so. Private organizations and religious institutions possess not only free speech rights in this regard, but property and contractual rights, too.

But when it comes to “child safety” mandates, some conservatives would toss all this out the window and undermine those rights, replacing them with burdensome regulatory mandates that tell private parties how to conduct their affairs. Again, there’s a lot of cognitive dissonance going on here, and it could produce serious blowback for conservatives when the property and contractual rights of other people or organizations are undermined on similar grounds.

5) It’s an embrace of frivolous lawsuits & the trial lawyers that bring them

The last time I checked, trial lawyers were not exactly the most conservative-friendly constituency. For many decades, conservatives have looked to advance tort reform, limit junk science and frivolous lawsuits, and make sure that the courts don’t engage in excessive judicial activism.

Unfortunately, many of the child safety regulations being proposed today would empower the regulatory state and trial lawyers at the same time. Many of the bills being floated open the door to open-ended litigation and potentially punishing liability for private platforms — and not just against deep-pocketed “Big Tech” companies. The fact is, once conservatives open the litigation floodgates based on amorphous accusations of potential online safety harms, they will be empowering the tort bar (one of the biggest supporters of the Democratic Party, no less) to launch a legal jihad against any and every media platform out there. Good luck putting that genie back in the bottle once it’s out.

6) It’s an embrace of the same moral panic arguments your parents leveled against you

How quickly we forget the accusations our own parents and others leveled against us as children. Remember when video games were going to make us a lost generation of murderous youth? Or when rap and rock-and-roll music were going to send us straight to hell? Today, those kids are all grown up and trying to tell us that they are fine but it’s this latest generation that is doomed. It’s just an endless generational cycle of moral panics. [See: “Why Do We Always Sell the Next Generation Short?” and “Confessions of a ‘Vidiot’: 50 Years of Video Games & Moral Panics”] Today’s conservatives need to remember that they, too, were once kids and somehow muddled through to adulthood.

The “3-E” Approach Is the Better Answer

At this point, some of the people who’ve read this far are screaming at the screen: “So, are you saying we should just do nothing!?”

Absolutely not. But it is important that we consider less onerous and more practical ways to address these challenging issues without falling prey to Big Government gimmicks that would undermine other important principles. We should start by acknowledging that there are no easy fixes or silver-bullet solutions. The plain truth of the matter is that the best solutions here can seem messy and unsatisfying to many because they require enormous ongoing efforts to mentor and assist our kids at a far deeper level than some folks are comfortable with.

For example, it is just insanely uncomfortable to have to speak with your kids about online bullying or harassment, pornography, violence in movies and games, hate speech, and so on. And I haven’t even mentioned the hardest thing to talk to kids about: the daily news of the real world, with its wars, violence, tragic accidents, famines, and more. Honestly, the hardest conversations I’ve had with my kids were those about school shootings. By comparison, many other discussions about online content and interactions were much easier. To the extent that we’re attempting to measure and address negative media effects, I firmly believe that there are few things in this world more horrifying to kids — or harder to talk with them about — than the first 10 minutes of what’s on cable news each hour of the day.

Regardless, whether we’re talking about the potential “harms” of mass media or online content, we cannot pretend there exists a simple solution to any of it. Here’s the better approach.

I recently authored a study for the American Enterprise Institute, “Governing Emerging Technology in an Age of Policy Fragmentation and Disequilibrium.” It was my attempt to sketch out a flexible, pragmatic, bottom-up set of governance principles for modern technology platforms and issues. In that report, I noted how “[t]he First Amendment constitutes a particularly high barrier to the use of hard law in the United States,” and that court challenges were likely to continue to block many of the regulatory efforts being floated today, just as has been the case countless times in recent decades. Thus, we need backup approaches to online safety beyond one-size-fits-all regulatory Hail Mary passes.

I have described that backup plan as the “3-E” approach or “layered approach” to online safety:

  • Empowerment of parents: Parental controls cannot solve all the world’s problems. It’s better to view them as helpful speed bumps or emergency alerts for when things are going badly for your child. In the old days, we placed a lot of faith in filtering, and that still has a role alongside other tools that help place reasonable limits not only on content but also on overall consumption. But the best types of parental empowerment are those that force conversations between parents and kids by allowing reasonable monitoring that is scaled by age (more limits for younger kids, gradually relaxed over time). Other carrot-and-stick tools and approaches can also help parents place smart limits on youth activity.
  • Education of youth: Education is the strategy with the most lasting impact for online safety. Education and digital literacy provide skills and wisdom that can last a lifetime. Specifically, education can help teach both kids (and adults!) how to behave in — or respond to — a wide variety of situations. Building resiliency and encouraging healthy interactions is the goal.
  • Enforcement of existing laws: There are many sensible and straightforward laws already in place that address more concrete types of harm and harassment. And we have lots of laws pertaining to fraud and unfair and deceptive practices. Sometimes these rules can be challenging (and time-consuming) to enforce, but they constitute an existing backstop that can handle most worst-case scenarios when other less-restrictive steps fall short. And we should certainly tap these existing remedies before advancing unworkable new regulatory regimes.

I noted in my AEI study that, between 2000 and 2010, six major online-safety task forces or blue-ribbon commissions were formed to study online-safety issues and consider what should be done to address them. Each of them recommended some variant of the “3-E” approach as they encouraged a variety of best practices, educational approaches, and technological-empowerment solutions to address various safety concerns. Self-regulatory codes, private content-rating systems, and a wide variety of different parental-control technologies all proliferated during this period. Many multi-stakeholder initiatives and other organizations were also formed to address governance issues collaboratively. There are countless groups doing important work on this front today, including my old friends at the Family Online Safety Institute (FOSI) among many others.

These organizations push for a layered approach to online safety and work closely with educators, child development experts, and other academics and activists to find workable solutions to new online safety challenges as they arise. Their work is never done, and at times it can feel overwhelming. But, again, it’s the nature of the task at hand. We all must work together to continuously devise new and better approaches to addressing these challenges, because they will be endless. But let’s please not expect that we can unload these responsibilities on government and expect regulators to somehow handle it for us.

Do the Ends Justify the Means When it Comes to Media & Content Control?

I could be wasting my breath here because I’ve been attempting to appeal to conservative principles that may be rapidly disappearing from the modern conservative movement. Donald Trump radically disrupted everything in American politics, but especially the Republican Party. Many so-called national conservatives now live by Trump’s central operating principle: The ends justify the means. The ends are “owning the libs” in any way possible. And “the libs” include not only anyone on the Left of the political spectrum, but even those individuals and institutions that Trumpian conservatives believe are “the enemy” and controlled by “liberal interests.” By their definition, this now includes virtually all large media and technology companies and platforms. Thus, when we turn to the means, it’s increasingly the case that just about anything goes — including many traditional conservative principles.

To see how far we’ve come, recall what President Ronald Reagan said 35 years ago when vetoing an effort to reinstate the Fairness Doctrine. “History has shown that the dangers of an overly timid or biased press cannot be averted through bureaucratic regulation, but only through the freedom and compe­tition that the First Amendment sought to guarantee,” he said. At the time, President Reagan was confronted with some of the same arguments we hear today about media being too biased or conservatives not getting a fair shake. But he called upon his fellow conservatives to reject the idea that Big Government was the solution to such problems.

Unfortunately, Mr. Trump and some of his most loyal followers and even some major conservative groups today have largely given up on this logic and instead embraced regulation. While Trumpian conservatives love to decry everyone they oppose as “communists,” ironically it is this same group that is embracing a sort of communications collectivism as it pertains to modern media control. In the Trumpian worldview, media and tech platforms are useful only to the extent they carry out the will of the party — or at least the man on top of it.

These national conservatives have made a horrible miscalculation. Feeling aggrieved by Big Tech “bias,” or just feeling overwhelmed by things they don’t like about online platforms, they’ve decided that two wrongs make a right. In reality, two political wrongs never make a right, but they almost always combine to make government a lot bigger and more powerful.

It’s an incredibly naïve gamble, almost certainly destined to fail, but conservatives should ask themselves what it would mean if it worked. The endless ratcheting of regulatory power would result in comprehensive state control of most channels of communication and information dissemination. Is this a game that you really think you can play better than the Lefties?

I’ll close by returning to one of Reagan’s favorite jokes. He always used to say that, “The nine most terrifying words in the English language are: I’m from the government and I’m here to help.” I would suggest that an even scarier version of that line would be, “We’re from the government and we’re here to help you parent your kids.”

Don’t let it be you uttering that line.

______________

Additional Reading

· Adam Thierer, “Again, We Should Not Ban All Teens from Social Media”

· Adam Thierer, “Why Do We Always Sell the Next Generation Short?”

· Adam Thierer, “The Classical Liberal Approach to Digital Media Free Speech Issues”

· Adam Thierer, “Confessions of a ‘Vidiot’: 50 Years of Video Games & Moral Panics”

· Adam Thierer, “Left and right take aim at Big Tech — and the First Amendment”

· Adam Thierer, “When It Comes to Fighting Social Media Bias, More Regulation Is Not the Answer”

· Adam Thierer, “Ongoing Series: Moral Panics / Techno-Panics”

· Adam Thierer, “No Goldilocks Formula for Content Moderation in Social Media or the Metaverse, But Algorithms Still Help”

· Adam Thierer, “FCC’s O’Rielly on First Amendment & Fairness Doctrine Dangers”

· Adam Thierer, “Conservatives & Common Carriage: Contradictions & Challenges”

· Adam Thierer, “The Great Deplatforming of 2021”

· Adam Thierer, “A Good Time to Re-Read Reagan’s Fairness Doctrine Veto”

· Adam Thierer, “Sen. Hawley’s Radical, Paternalistic Plan to Remake the Internet”

· Adam Thierer, “How Conservatives Came to Favor the Fairness Doctrine & Net Neutrality”

· Adam Thierer, “Sen. Hawley’s Moral Panic Over Social Media”

· Adam Thierer, “The White House Social Media Summit and the Return of ‘Regulation by Raised Eyebrow’”

· Adam Thierer, “The Surprising Ideological Origins of Trump’s Communications Collectivism”

· Adam Thierer, Parental Controls & Online Child Protection: A Survey of Tools and Methods (2009).

The Proper Governance Default for AI
https://techliberation.com/2022/05/26/the-proper-governance-default-for-ai/
Thu, 26 May 2022

[This is a draft of a section of a forthcoming study on “A Flexible Governance Framework for Artificial Intelligence,” which I hope to complete shortly. I welcome feedback. I have also cross-posted this essay at Medium.]

Debates about how to embed ethics and best practices into AI product design are where the question of public policy defaults becomes important. To the extent AI design becomes the subject of legal or regulatory decision-making, a choice must be made between two general approaches: the precautionary principle or the proactionary principle.[1] While there are many hybrid governance approaches between these two poles, the crucial issue is whether the initial legal default for AI technologies will be set closer to the red light of the precautionary principle (i.e., permissioned innovation) or to the green light of the proactionary principle (i.e., permissionless innovation). Each governance default is discussed in turn below.

The Problem with the Precautionary Principle as the Policy Default for AI

The precautionary principle holds that innovations are to be curtailed or potentially even disallowed until the creators of those new technologies can prove that they will not cause any theoretical harms. The classic formulation of the precautionary principle can be found in the “Wingspread Statement,” which was formulated at an academic conference that took place at the Wingspread Conference Center in Wisconsin in 1998. It read: “Where an activity raises threats of harm to the environment or human health, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically.”[2] There have been many reformulations of the precautionary principle over time but, as legal scholar Cass Sunstein has noted, “in all of them, the animating idea is that regulators should take steps to protect against potential harms, even if causal chains are unclear and even if we do not know that those harms will come to fruition.”[3] Put simply, under almost all varieties of the precautionary principle, innovation is treated as “guilty until proven innocent.”[4] We can also think of this as permissioned innovation.

The logic animating the precautionary principle reflects a well-intentioned desire to play it safe in the face of uncertainty. The problem lies in the way this instinct gets translated into law and regulation. Making the precautionary principle the public policy default for any given technology or sector has a strong bearing on how much innovation we can expect to flow from it. When trial-and-error experimentation is preemptively forbidden or discouraged by law, it can limit many of the positive outcomes that typically accompany efforts by people to be creative and entrepreneurial. This can, in turn, give rise to different risks for society in terms of forgone innovation, growth, and corresponding opportunities to improve human welfare in meaningful ways.

St. Thomas Aquinas once observed that if the sole goal of a captain were to preserve their ship, the captain would keep it in port forever. But that clearly is not the captain’s highest goal. Aquinas was making a simple but powerful point: There can be no reward without some effort and even some risk-taking. Ship captains brave the high seas because they are in search of a greater good, such as recognition, adventure, or income. Keeping ships in port forever would preserve their vessels, but at what cost?

Similarly, consider the wise words of Wilbur Wright, who pioneered human flight. Few people better understood the profound risks associated with entrepreneurial activities. After all, Wilbur and his brother were trying to figure out how to literally lift humans off the Earth. The dangers were real, but worth taking. “If you are looking for perfect safety,” Wright said, “you would do well to sit on a fence and watch the birds.” Humans would have never taken to the skies if the Wright brothers had not gotten off the fence and taken the risks they did. Risk-taking drives innovation and, over the long-haul, improves our well-being.[5] Nothing ventured, nothing gained.

These lessons can be applied to public policy by considering what would happen if, in the name of safety, public officials told captains to never leave port or told aspiring pilots to never leave the ground. The opportunity cost of inaction can be hard to quantify, but it should be clear that if we organized our entire society around a rigid application of the precautionary principle, progress and prosperity would suffer.

Heavy-handed preemptive restraints on creative acts can have deleterious effects because they raise barriers to entry, increase compliance costs, and create more risk and uncertainty for entrepreneurs and investors. Thus, it is the unseen costs, primarily in the form of forgone innovation opportunities, that make the precautionary principle so problematic as a policy default. This is why scientist Martin Rees speaks of “the hidden cost of saying no” associated with the precautionary principle.[6]

The precise way the precautionary principle leads to this result is that it derails the so-called learning curve by limiting opportunities to learn from trial-and-error experimentation with new and better ways of doing things.[7] The learning curve refers to the way that individuals, organizations, or industries are able to learn from their mistakes, improve their designs, enhance productivity, lower costs, and then offer superior products based on the resulting knowledge.[8] In his recent book, Where Is My Flying Car?, J. Storrs Hall documents how, over the last half century, “regulation clobbered the learning curve” for many important technologies in the U.S., especially nuclear, nanotech, and advanced aviation.[9] Hall shows how society was denied many important innovations due to endless foot-dragging or outright opposition to change from special interests, anti-innovation activists, and over-zealous bureaucrats.

In many cases, innovators don’t even know what they are up against because, as many scholars have noted, “the precautionary principle, in all of its forms, is fraught with vagueness and ambiguity.”[10] It creates confusion and fear about the wisdom of taking action in the face of uncertainty. Worst-case thinking paralyzes regulators who aim to “play it safe” at all costs, producing an endless snafu of red tape as layer upon layer of mandates builds up and blocks progress. The upshot is what many scholars now decry as a culture of “vetocracy,” a term describing the many veto points within modern political systems that hold back innovation, development, and economic opportunity.[11] This endless accumulation of veto points in the policy process, in the form of mandates and restrictions, can greatly curtail innovation opportunities. “Like sediment in a harbor, law has steadily accumulated, mainly since the 1960s, until most productive activity requires slogging through a legal swamp,” says Philip K. Howard, chair of Common Good.[12] “Too much law,” he argues, “can have similar effects as too little law,” because:

People slow down, they become defensive, they don’t initiate projects because they are surrounded by legal risks and bureaucratic hurdles. They tiptoe through the day looking over their shoulders rather than driving forward on the power of their instincts. Instead of trial and error, they focus on avoiding error.[13]

This is exactly why it is important that policymakers not get too caught up in attempts to preemptively resolve every hypothetical worst-case scenario associated with AI technologies. The problem with that approach was succinctly summarized by the political scientist Aaron Wildavsky when he noted, “If you can do nothing without knowing first how it will turn out, you cannot do anything at all.”[14] Or, as I have stated in a book on this topic, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about.”[15]

This does not mean society should dismiss all concerns about the risks surrounding AI. Some technological risks do necessitate a degree of precautionary policy, but proportionality is crucial, notes Gabrielle Bauer, a Toronto-based medical writer. “Used too liberally,” she argues, “the precautionary principle can keep us stuck in a state of extreme risk-aversion, leading to cumbersome policies that weigh down our lives. To get to the good parts of life, we need to accept some risk.”[16] It is not enough to simply hypothesize that certain AI innovations might entail some risk. The critics need to prove it using risk analysis techniques that properly weigh both the potential costs and benefits.[17] Moreover, when conducting such analyses, the full range of trade-offs associated with preemptive regulation must be evaluated. Again, where precautionary constraints might deny society life-enriching devices or services, those costs must be acknowledged.

Generally speaking, the most extreme precautionary controls should only be imposed when the potential harms in question are highly probable, tangible, immediate, irreversible, catastrophic, or directly threatening to life and limb in some fashion.[18] In the context of AI and ML systems, it may be the case that such a test is satisfied already for law enforcement use of certain algorithmic profiling techniques. And that test is satisfied for so-called “killer robots,” or autonomous military technology.[19] These are often described as “existential risks.” The precautionary principle is the right default in these cases because it is abundantly clear how unrestricted use would have catastrophic consequences. For similar reasons, governments have long imposed comprehensive restrictions on certain types of weapons.[20] And although nuclear and chemical technologies have many important applications, their use must also be limited to some degree even outside of militaristic applications because they can pose grave danger if misused.

But the vast majority of AI-enabled technologies are not like this. Most innovations should not be treated the same as a hand grenade or a ticking time bomb. In reality, most algorithmic failures will be more mundane and difficult to foresee in advance. By their very nature, algorithms are constantly evolving because programs and systems are being endlessly tweaked by designers to improve them. In his books on the evolution of engineering and systems design, Henry Petroski has noted that “the shortcomings of things are what drive their evolution.”[21] The normal state of things is “ubiquitous imperfection,” he notes, and it is precisely that reality that drives efforts to continuously innovate and iterate.[22]

Regulations rooted in the precautionary principle hope to preemptively find and address product imperfections before any harm comes from them. In reality, and as explained more below, it is only through ongoing experimentation that we find both the nature of failures and the knowledge to know how to correct them. As Petroski observes, “the history of engineering in general, may be told in its failures as well as in its triumphs. Success may be grand, but disappointment can often teach us more.”[23] This is particularly true for complex algorithmic systems, where rapid-fire innovation and incessant iteration are the norm.

Importantly, the problem with precautionary regulation for AI is not just that it might be over-inclusive in seeking to regulate hypothetical problems that never develop. Precautionary regulation can also be under-inclusive by missing problematic behavior or harms that no one anticipated before the fact. Only experience and experimentation reveal certain problems.

In sum, we should not presume that there is a clear preemptive regulatory solution to every problem some people raise about AI, nor should we presume we can even accurately identify all such problems that might come about in the future. Moreover, some risks will never be eliminated entirely, meaning that risk mitigation is the wiser approach. This is why a more flexible bottom-up governance strategy focused on responsiveness and resiliency makes more sense than heavy-handed, top-down strategies that would only avoid risks by making future innovations extremely difficult if not impossible.

The “Proactionary Principle” is the Better Default for AI Policy

The previous section made it clear why the precautionary principle should generally not be used as our policy default if we hope to encourage the development of AI applications and services. What we need is a policy approach that:

  • objectively evaluates the concerns raised about AI systems and applications;
  • considers whether more flexible governance approaches might be available to address them; and,
  • does so without resorting to the precautionary principle as a first-order response.

The proactionary principle is the better general policy default for AI because it satisfies these three objectives.[24] Philosopher Max More defines the proactionary principle as the idea that policymakers should, “[p]rotect the freedom to innovate and progress while thinking and planning intelligently for collateral effects.”[25] There are different names for this same concept, including the innovation principle, which Daniel Castro and Michael McLaughlin of the Information Technology and Innovation Foundation say represents the belief that “the vast majority of new innovations are beneficial and pose little risk, so government should encourage them.”[26] Permissionless innovation is another name for the same concept: the idea that experimentation with new technologies and business models should generally be permitted by default.[27]

What binds these concepts together is the belief that innovation should generally be treated as innocent until proven guilty. There will be risks and failures, of course, but the permissionless innovation mindset views them as important learning experiences. These experiences are chances for individuals, organizations, and all of society to make constant improvements through incessant experimentation with new and better ways of doing things.[28] As Virginia Postrel argued in her 1998 book, The Future and Its Enemies, progress demands “a decentralized, evolutionary process” and mindset in which mistakes are not viewed as permanent disasters but instead as “the correctable by-products of experimentation.”[29] “No one wants to learn by mistakes,” Petroski once noted, “but we cannot learn enough from successes to go beyond the state of the art.”[30] Instead we must realize, as other scholars have observed, that “[s]uccess is the culmination of many failures”[31] and understand “failure as the natural consequence of risk and complexity.”[32]

This is why the default for public policy for AI innovation should, whenever possible, be more green lights than red ones to allow for the maximum amount of trial-and-error experimentation, which encourages ongoing learning.[33] “Experimentation matters,” observes Stefan H. Thomke of the Harvard Business School, “because it fuels the discovery and creation of knowledge and thereby leads to the development and improvement of products, processes, systems, and organizations.”[34]

Obviously, risks and mistakes are “the very things regulators inherently want to avoid,”[35] but “if innovators fear they will be punished for every mistake,” Daniel Castro and Alan McQuinn argue, “then they will be much less assertive in trying to develop the next new thing.”[36] And for all the reasons already stated, that would represent the end of progress because it would foreclose the learning process that allows society to discover new, better, and safer ways of doing things. Technology author Kevin Kelly puts it this way:

technologies must be evaluated in action, by action. We test them in labs, we try them out in prototypes, we use them in pilot programs, we adapt our expectations, we monitor their alterations, we redefine their aims as they are modified, we retest them given actual behavior, we re-direct them to new jobs when we are not happy with their outcomes.[37]

In other words, the proactionary principle appreciates the benefits that flow from learning by doing. The goal is to continuously assess and prioritize risks from natural and human-made systems alike, and then formulate and reformulate our toolkit of possible responses to those risks using the most practical and effective solutions available. This should make it clear that the proactionary approach is not synonymous with anarchy. Various laws, government bodies, and especially the courts play an important role in protecting rights, health, and order. But policies need to be formulated such that innovators and innovation are given the benefit of the doubt and risks are analyzed and addressed in a more flexible fashion.

Some of the most effective ways to address potential AI risks already exist in the form of “soft law” and decentralized governance solutions. These will be discussed at greater length below. But existing legal remedies include various common law solutions (torts, class actions, contract law, etc.), the recall authority possessed by many regulatory agencies, and various consumer protection policies. Ex post remedies are generally superior to ex ante prior restraints if we hope to maximize innovation opportunities. Ex ante regulatory defaults are too often set closer to the red light of the precautionary principle and then enforced through volumes of convoluted red tape.

This is what the World Economic Forum has referred to as a “regulate-and-forget” system of governance,[38] or what others call a “build-and-freeze” model of regulation.[39] In such technological governance regimes, older rules are almost never revisited, even after new social, economic, and technical realities render them obsolete or ineffective.[40] A 2017 survey of the U.S. Code of Federal Regulations by Deloitte consultants revealed that 68 percent of federal regulations have never been updated and that 17 percent have been updated only once.[41] Public policies for complex and fast-moving technologies like AI cannot be set in stone and forgotten like that if America hopes to remain on the cutting edge of this sector.

Advocates of the proactionary principle look to counter this problem not by eliminating all laws or agencies, but by bringing them in line with flexible governance principles rooted in more decentralized approaches to policy concerns.[42] As many regulatory advocates suggest, it is important to embed or “bake in” various ethical best practices into AI systems to ensure that they benefit humanity. But this, too, is a process of ongoing learning and there are many ways to accomplish such goals without derailing important technological advances. What is often referred to as “value alignment” or “ethically-aligned design” is challenged by the fact that humans regularly disagree profoundly about many moral issues.[43] “Before we can put our values into machines, we have to figure out how to make our values clear and consistent,” says Harvard University psychologist Joshua D. Greene.[44]

The “Three Laws of Robotics” famously formulated decades ago by Isaac Asimov in his science fiction stories continue to be widely discussed today as a guide to embedding ethics into machines.[45] They read:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

What is usually forgotten about these principles, as AI expert Melanie Mitchell reminds us, is the way Asimov, “often focused on the unintended consequences of programming ethical rules into robots,” and how he made it clear that, if applied too literally, “such a set of rules would inevitably fail.”[46]

This is why flexibility and humility are essential virtues when thinking about AI policy. The optimal governance regime for AI can be shaped by responsible innovation practices and embed important ethical principles by design without immediately defaulting to a rigid application of the precautionary principle.[47] In other words, an innovation policy regime rooted in the proactionary principle can also be infused with the same values that animate a precautionary principle-based system.[48] The difference is that the proactionary principle-based approach will look to achieve these goals in a more flexible fashion using a variety of experimental governance approaches and ex post legal enforcement options, while also encouraging still more innovation to solve problems past innovations may have caused.

To reiterate, not every AI risk is foreseeable, and many risks and harms are more amorphous or uncertain. In this sense, the wisest governance approach for AI was recently outlined by the National Institute of Standards and Technology (NIST) in its initial draft AI Risk Management Framework, which is a multistakeholder effort “to describe how the risks from AI-based systems differ from other domains and to encourage and equip many different stakeholders in AI to address those risks purposefully.”[49] NIST notes that the goal of the Framework is:

to be responsive to new risks as they emerge rather than enumerating all known risks in advance. This flexibility is particularly important where impacts are not easily foreseeable, and applications are evolving rapidly. While AI benefits and some AI risks are well-known, the AI community is only beginning to understand and classify incidents and scenarios that result in harm.[50]

This is a sensible framework for how to address AI risks because it makes it clear that it will be difficult to preemptively identify and address all potential AI risks. At the same time, there will be a continuing need to advance AI innovation while addressing AI-related harms. The key to striking that balance will be decentralized governance approaches and soft law techniques described below.

[Note: The subsequent sections of the study will detail how decentralized governance approaches and soft law techniques already are helping to address concerns about AI risks.]

Endnotes:

[1]     Adam Thierer, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, 2nd ed. (Arlington, VA: Mercatus Center at George Mason University, 2016): 1-6, 23-38; Adam Thierer, Evasive Entrepreneurs & the Future of Governance (Washington, DC: Cato Institute, 2020): 48-54.

[2]     “Wingspread Statement on the Precautionary Principle,” January 1998, https://www.gdrc.org/u-gov/precaution-3.html.

[3]     Cass R. Sunstein, Laws of Fear: Beyond the Precautionary Principle (Cambridge, UK: Cambridge University Press, 2005). (“The Precautionary Principle takes many forms. But in all of them, the animating idea is that regulators should take steps to protect against potential harms, even if causal chains are unclear and even if we do not know that those harms will come to fruition.”)

[4]     Henk van den Belt, “Debating the Precautionary Principle: ‘Guilty until Proven Innocent’ or ‘Innocent until Proven Guilty’?” Plant Physiology 132 (2003): 1124.

[5]     H.W. Lewis, Technological Risk (New York: W.W. Norton & Co., 1990): x. (“The history of the human race would be dreary indeed if none of our forebears had ever been willing to accept risk in return for potential achievement.”)

[6]     Martin Rees, On the Future: Prospects for Humanity (Princeton, NJ: Princeton University Press, 2018): 136.

[7]     Adam Thierer, “Failing Better: What We Learn by Confronting Risk and Uncertainty,” in Sherzod Abdukadirov (ed.), Nudge Theory in Action: Behavioral Design in Policy and Markets (Palgrave Macmillan, 2016): 65-94.

[8]     Adam Thierer, “How to Get the Future We Were Promised,” Discourse, January 18, 2022, https://www.discoursemagazine.com/culture-and-society/2022/01/18/how-to-get-the-future-we-were-promised.

[9]     J. Storrs Hall, Where Is My Flying Car? (San Francisco: Stripe Press, 2021).

[10]    Derek Turner and Lauren Hartzell Nichols, “The Lack of Clarity in the Precautionary Principle,” Environmental Values, Vol 13, No. 4 (2004): 449.

[11]    William Rinehart, “Vetocracy, the Costs of Vetos and Inaction,” Center for Growth & Opportunity at Utah State University, March 24, 2022, https://www.thecgo.org/benchmark/vetocracy-the-costs-of-vetos-and-inaction; Adam Thierer, “Red Tape Reform is the Key to Building Again,” The Hill, April 28, 2022, https://thehill.com/opinion/finance/3470334-red-tape-reform-is-the-key-to-building-again.

[12]    Philip K. Howard, “Radically Simplify Law,” Cato Institute, Cato Online Forum, http://www.cato.org/publications/cato-online-forum/radically-simplify-law.

[13]    Ibid.

[14]    Aaron Wildavsky, Searching for Safety (New Brunswick, NJ: Transaction Publishers, 1989): 38.

[15]    Thierer, Permissionless Innovation, at 2.

[16]    Gabrielle Bauer, “Danger: Caution Ahead,” The New Atlantis, February 4, 2022, https://www.thenewatlantis.com/publications/danger-caution-ahead.

[17]    Richard B. Belzer, “Risk Assessment, Safety Assessment, and the Estimation of Regulatory Benefits” (Mercatus Working Paper, Mercatus Center at George Mason University, Arlington, VA, 2012), 5, http://mercatus.org/publication/risk-assessment-safety-assessment-and-estimation-regulatory-benefits; John D. Graham and Jonathan Baert Wiener, eds. Risk vs. Risk: Tradeoffs in Protecting Health and the Environment, (Cambridge, MA: Harvard University Press, 1995).

[18]    Thierer, Permissionless Innovation, at 33-8.

[19]    Adam Satariano, Nick Cumming-Bruce and Rick Gladstone, “Killer Robots Aren’t Science Fiction. A Push to Ban Them Is Growing,” New York Times, December 17, 2021, https://www.nytimes.com/2021/12/17/world/robot-drone-ban.html.

[20]    Adam Thierer, “Soft Law: The Reconciliation of Permissionless & Responsible Innovation,” in Adam Thierer, Evasive Entrepreneurs & the Future of Governance (Washington, DC: Cato Institute, 2020): 183-240, https://www.mercatus.org/publications/technology-and-innovation/soft-law-reconciliation-permissionless-responsible-innovation.

[21]    Henry Petroski, The Evolution of Useful Things (New York: Vintage Books, 1994): 34.

[22]    Ibid., 27.

[23]    Henry Petroski, To Engineer is Human: The Role of Failure in Successful Design (New York: Vintage, 1992): 9.

[24]    James Lawson, These Are the Droids You’re Looking For: An Optimistic Vision for Artificial Intelligence, Automation and the Future of Work (London: Adam Smith Institute, 2020): 86, https://www.adamsmith.org/research/these-are-the-droids-youre-looking-for.

[25]    Max More, “The Proactionary Principle (March 2008),” Max More’s Strategic Philosophy, March 28, 2008, http://strategicphilosophy.blogspot.com/2008/03/proactionary-principle-march-2008.html.

[26]    Daniel Castro & Michael McLaughlin, “Ten Ways the Precautionary Principle Undermines Progress in Artificial Intelligence,” Information Technology and Innovation Foundation, February 4, 2019, https://itif.org/publications/2019/02/04/ten-ways-precautionary-principle-undermines-progress-artificial-intelligence.

[27]    Thierer, Permissionless Innovation.

[28]    Thierer, “Failing Better.”

[29]    Virginia Postrel, The Future and Its Enemies (New York: The Free Press, 1998): xiv.

[30]    Henry Petroski, To Engineer is Human: The Role of Failure in Successful Design (New York: Vintage, 1992): 62.

[31]    Kevin Ashton, How to Fly a Horse: The Secret History of Creation, Invention, and Discovery (New York: Doubleday, 2015): 67.

[32]    Megan McArdle, The Up Side of Down: Why Failing Well is the Key to Success (New York: Viking, 2014), 214.

[33]    F. A. Hayek, The Constitution of Liberty (London: Routledge, 1960, 1990): 81. (“Humiliating to human pride as it may be, we must recognize that the advance and even preservation of civilization are dependent upon a maximum of opportunity for accidents to happen.”)

[34]    Stefan H. Thomke, Experimentation Matters: Unlocking the Potential of New Technologies for Innovation (Harvard Business Review Press, 2003), 1.

[35]    Daniel Castro and Alan McQuinn, “How and When Regulators Should Intervene,” Information Technology and Innovation Foundation Reports, (February 2015): 2 http://www.itif.org/publications/how-and-when-regulators-should-intervene.

[36]    Ibid.

[37]    Kevin Kelly, “The Pro-Actionary Principle,” The Technium, November 11, 2008, https://kk.org/thetechnium/the-pro-actiona.

[38]    World Economic Forum, Agile Regulation for the Fourth Industrial Revolution (Geneva, Switzerland: 2020): 4, https://www.weforum.org/projects/agile-regulation-for-the-fourth-industrial-revolution.

[39]    Jordan Reimschisel and Adam Thierer, “’Build & Freeze’ Regulation Versus Iterative Innovation,” Plain Text, November 1, 2017, https://readplaintext.com/build-freeze-regulation-versus-iterative-innovation-8d5a8802e5da.

[40]    Adam Thierer, “Spring Cleaning for the Regulatory State,” AIER, May 23, 2019, https://www.aier.org/article/spring-cleaning-for-the-regulatory-state.

[41]    Daniel Byler, Beth Flores & Jason Lewris, “Using Advanced Analytics to Drive Regulatory Reform: Understanding Presidential Orders on Regulation Reform,” Deloitte, 2017, https://www2.deloitte.com/us/en/pages/public-sector/articles/advanced-analytics-federal-regulatory-reform.html.

[42]    Adam Thierer, Governing Emerging Technology in an Age of Policy Fragmentation and Disequilibrium, American Enterprise Institute (April 2022), https://platforms.aei.org/can-the-knowledge-gap-between-regulators-and-innovators-be-narrowed.

[43]    Brian Christian, The Alignment Problem: Machine Learning and Human Values (New York: W.W. Norton & Company, 2020).

[44]    Joshua D. Greene, “Our Driverless Dilemma,” Science (June 2016): 1515.

[45]    Susan Leigh Anderson, “Asimov’s ‘Three Laws of Robotics’ and Machine Metaethics,” AI and Society, Vol. 22, No. 4, (2008): 477-493.

[46]    Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans (New York: Farrar, Straus and Giroux, 2019): 126 (Kindle edition).

[47]    Thomas A. Hemphill, “The Innovation Governance Dilemma: Alternatives to the Precautionary Principle,” Technology in Society, Vol. 63 (2020): 6, https://ideas.repec.org/a/eee/teinso/v63y2020ics0160791x2030751x.html.

[48]    Adam Thierer, “Are ‘Permissionless Innovation’ and ‘Responsible Innovation’ Compatible?” Technology Liberation Front, July 12, 2017, https://techliberation.com/2017/07/12/are-permissionless-innovation-and-responsible-innovation-compatible.

[49]    The National Institute of Standards and Technology, “AI Risk Management Framework: Initial Draft,” (March 17, 2022): 1, https://www.nist.gov/itl/ai-risk-management-framework.

[50]    Ibid., at 5.

5 Tech Policy Topics to Follow in the Biden Administration and 117th Congress
https://techliberation.com/2020/11/12/5-tech-policy-topics-to-follow-in-the-biden-administration-and-117th-congress/
Thu, 12 Nov 2020 14:08:17 +0000

In a five-part series at the American Action Forum, I presented the candidates’ positions on a range of tech policy topics prior to the 2020 presidential election, including the race to 5G, Section 230, antitrust, and the sharing economy. Now that the election is over, it is time to examine which tech policy topics will gain more attention and how the debate around various tech policy issues may change. In no particular order, here are five key tech policy issues to be aware of heading into a new administration and a new Congress.

The Use of Soft Law for Tech Policy

In 2021, it is likely America will still have a divided government, with Democrats controlling the White House and House of Representatives and Republicans expected to narrowly control the Senate. The likely result, particularly given the split between the two houses of Congress, is that many tech policy proposals will face logjams, leaving many tech policy questions without the legislation or hard law framework that might be desired. As a result, we are likely to continue to see “soft law”—regulation by various sub-regulatory means such as guidance documents, workshops, and industry consultations—rather than formal action. While a Biden Administration will likely also produce more formal regulatory action from the administrative state, such actions require a lengthy process of comments and formal or informal rulemaking. As technology continues to accelerate, many agencies turn to soft law to avoid “pacing problems,” where policy cannot react as quickly as technology and rules may be outdated by the time they go into effect.

A soft law approach can be preferable to a hard law approach because it is often better able to adapt to rapidly changing technologies. Policymakers in the new administration, however, should work to ensure that they use this tool in a way that enables innovation, with appropriate safeguards so that these actions do not become a crushing regulatory burden.

Return of the Net Neutrality Debate

One key difference between President Trump’s and President-elect Biden’s stances on tech policy concerns whether the Federal Communications Commission (FCC) should categorize internet service providers (ISPs) as Title II “common carrier services,” thereby enabling regulations such as “net neutrality” that place additional requirements on how these service providers can prioritize data. President-elect Biden has been clear in the past that he favors reinstating net neutrality.

The imposition of this classification and its accompanying regulations occurred during the Obama Administration, and the FCC removed both the Title II classification and the additional “net neutrality” regulations during the Trump Administration. Critics of these changes made many hyperbolic claims at the time, such as that Netflix streams would be interrupted or that ISPs would use their freedom in a world without net neutrality to block abortion resources or pro-feminist groups. These concerns have proven to be misguided. If anything, the COVID-19 pandemic has shown the benefits of the robust internet infrastructure and expanded investment that a light-touch approach has yielded.

It is likely that net neutrality will once again be debated. Beyond the imposition of these restrictions themselves, repeated changes to such a key classification could create additional regulatory uncertainty and deter or delay investment and innovation in this valuable infrastructure. To overcome such concerns, congressional action could provide certainty in a bipartisan and balanced way and avoid such dramatic back-and-forth swings.

Debates Regarding Sharing Economy Providers’ Classification as Independent Contractors

California voters passed Proposition 22, undoing the misguided reclassification of app-based service drivers as employees rather than independent contractors under AB5; during the campaign, however, President-elect Biden stated that he supports AB5 and called for a similar approach nationwide. Such an approach would make things more difficult for new sharing economy platforms and a wide range of independent workers (such as freelance journalists) at a time when the country is trying to recover economically.

Changing classifications to make it more difficult to treat service providers as independent contractors makes it less likely that platforms such as Fiverr or TaskRabbit could offer individuals a way to market their skills. Reclassification as employees also misunderstands the ways in which many people choose to engage in gig economy work and the advantages that such flexibility offers. As my AAF colleague Isabel Soto notes, a similar national approach found in the Protecting the Right to Organize (PRO) Act could impose “between $3.6 billion and $12.1 billion in additional costs to businesses” at a time when many are seeking to recover from the recession. Instead, both parties should look for solutions that preserve the benefits of the flexible arrangements many seek in such work, while allowing creative solutions and opportunities for businesses that wish to provide additional benefits to workers without risking reclassification.

Shifting Conversations and Debates Around Section 230 

Section 230 has recently faced most of its criticism from Republicans regarding allegations of anti-conservative bias. President-elect Biden, however, has also called to revoke Section 230 and to set up a taskforce regarding “Online Harassment and Abuse.” While this may seem like a positive step to resolving concerns about online content, it could also open the door to government intervention in speech that is not widely agreed upon and chip away at the liability protection for content moderation. 

For example, even though the Stop Enabling Sex Traffickers Act targeted the heinous crime of sex trafficking (which was already not subject to Section 230 protection) and was aimed at companies such as Backpage, where such illegal activity was known to be conducted, it has resulted in legitimate speech such as Craigslist personal ads being removed and in companies such as Salesforce being sued over what third parties used their products for. A carveout for hate speech or misinformation would only pose more difficulties for many businesses. These terms do not have clearly agreed-upon meanings and often require far more nuanced understanding for content moderation decisions. Enforcing changes that limit online speech, even distasteful and hateful speech, would dramatically depart from the First Amendment jurisprudence holding that such speech is still protected, and truly enforcing them would require significant government intrusion. In the UK, for example, an average of nine people a day were questioned or arrested over offensive or harassing “trolling” in online posts, messages, or forums under a law targeting the kind of online harassment and abuse the taskforce would be expected to consider.

Online speech has provided new ways to connect, and Section 230 keeps the barriers to entry low. It is fair to be concerned about the impact of negative behavior, but policymakers should also recognize the impact that online spaces have had on allowing marginalized communities to connect and be concerned about the unintended consequences changes to Section 230 could have. 

Continued Antitrust Scrutiny of “Big Tech” 

One part of the “techlash” that shows no sign of diminishing in the new administration or new Congress is the use of antitrust to go after “Big Tech.” While it remains to be seen whether the Biden Department of Justice will continue the current case against Google, there are indications that it and congressional Democrats will continue to pursue these successful companies with creative theories of harm that do not reflect current antitrust standards.

Instead of assuming a large and popular company automatically merits competition scrutiny, or attempting to use antitrust to achieve policy changes for which it is an ill-fitted tool, the next administration should return to the principled approach of the consumer welfare standard. Under such an approach, antitrust is focused on consumers, not competitors. Companies would need to be shown to be dominant in their market, to be abusing that dominance in some way, and to be harming consumers. This approach also provides an objective standard that lets companies and consumers know how actions will be judged under competition law. Based on what is publicly known, the proposed cases against the large tech companies fail at least one element of this test.

There will likely be a shift in some of the claimed harms, but unfortunately scrutiny of large tech companies and calls to change antitrust laws to go after these companies are likely to continue. 

Conclusion 

There are many other technology and innovation issues the next administration and Congress will face, including not only the issues mentioned above but also emerging technologies like 5G, the Internet of Things, and autonomous vehicles. Other issues, such as the digital divide, provide an opportunity for policymakers on both sides of the aisle to come together, have a beneficial impact, and craft creative and adaptable solutions. Hopefully, the Biden Administration and the new Congress will continue a light-touch approach that allows entrepreneurs to engage with innovative ideas and continues American leadership in the technology sector.
