Technology Liberation Front (https://techliberation.com) — Keeping politicians’ hands off the Net & everything else related to technology

On “Pausing” AI
https://techliberation.com/2023/04/07/on-pausing-ai/
April 7, 2023

Recently, the Future of Life Institute released an open letter, signed by some computer science luminaries and others, calling for a 6-month “pause” on the deployment and research of “giant” artificial intelligence (AI) technologies. Eliezer Yudkowsky, a prominent AI ethicist, then made news by arguing that the “pause” letter did not go far enough, proposing that governments consider “airstrikes” against data processing centers, or even be open to the use of nuclear weapons. This is, of course, quite insane. Yet this is the state of things today, as an AI technopanic seems to be growing faster than any of the technopanics I’ve covered in my 31 years in the field of tech policy—and I have covered a lot of them.

In a new joint essay co-authored with Brent Orrell of the American Enterprise Institute and Chris Meserole of Brookings, we argue that “the ‘pause’ we are most in need of is one on dystopian AI thinking.” The three of us recently served on a blue-ribbon Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation, an independent effort assembled by the U.S. Chamber of Commerce. In our essay, we note how:

Many of these breakthroughs and applications will already take years to work their way through the traditional lifecycle of development, deployment, and adoption and can likely be managed through legal and regulatory systems that are already in place. Civil rights laws, consumer protection regulations, agency recall authority for defective products, and targeted sectoral regulations already govern algorithmic systems, creating enforcement avenues through our courts and by common law standards allowing for development of new regulatory tools that can be developed as actual, rather than anticipated, problems arise.

“Instead of freezing AI we should leverage the legal, regulatory, and informal tools at hand to manage existing and emerging risks while fashioning new tools to respond to new vulnerabilities,” we conclude. Also on the pause idea, it’s worth checking out this excellent essay from Bloomberg Opinion editors on why “An AI ‘Pause’ Would Be a Disaster for Innovation.”

The problem is not with the “pause” per se. Even if the signatories could somehow enforce a worldwide stop-work order, six months probably wouldn’t do much to halt advances in AI. If a brief and partial moratorium draws attention to the need to think seriously about AI safety, it’s hard to see much harm. Unfortunately, a pause seems likely to evolve into a more generalized opposition to progress.

The editors continue on to rightly note:

This is a formula for outright stagnation. No one can ever be fully confident that a given technology or application will only have positive effects. The history of innovation is one of trial and error, risk and reward. One reason why the US leads the world in digital technology — why it’s home to virtually all the biggest tech platforms — is that it did not preemptively constrain the industry with well-meaning but dubious regulation. It’s no accident that all the leading AI efforts are American too.

That is 100% right, and I appreciate the Bloomberg editors linking to my latest study on AI governance when they made this point. In this new R Street Institute study, I explain why “Getting AI Innovation Culture Right,” is essential to make sure we can enjoy the many benefits that algorithmic systems offer, while also staying competitive in the global race for competitive advantage in this space.

That report is the first in a trilogy of big studies on decentralized, flexible governance of artificial intelligence. We can achieve AI safety without crushing top-down bans or unworkable “pauses,” I argue. My next two papers are, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence” (due out April 20th) and “Existential Risks & Global Governance Issues Surrounding AI & Robotics” (due out late May or early June). I’m also working on a co-authored essay taking a deep dive into the idea of AI impact assessments / auditing (late Spring / early Summer).

Relatedly, on April 7th, DeepLearningAI held an event on “Why a 6-Month AI Pause is a Bad Idea” featuring leading AI scientists Andrew Ng and Yann LeCun discussing the trade-offs associated with the proposal. A crucial point made in the discussion is that a pause, especially a pause in the form of a governmental ban, would be a misguided innovation policy decision. They stressed that there will be policy interventions to address targeted risks from specific algorithmic applications, but that it would be a serious mistake to stop the overall development of the underlying technological capabilities. It’s worth watching.

For more on AI policy, here’s a list of some of my latest reports and essays. Much more to come. AI policy will be the biggest tech policy fight of our lifetimes.

Tech Regulation Will Increasingly Be Driven Through the Prism of “Algorithmic Fairness”
https://techliberation.com/2022/11/06/tech-regulation-will-increasingly-be-driven-through-the-prism-of-algorithmic-fairness/
November 6, 2022

We are entering a new era for technology policy in which many pundits and policymakers will use “algorithmic fairness” as a universal Get Out of Jail Free card when they push for new regulations on digital speech and innovation. Proposals to regulate things like “online safety,” “hate speech,” “disinformation,” and “bias” among other things often raise thorny definitional questions because of their highly subjective nature. In the United States, efforts by government to control these things will often trigger judicial scrutiny, too, because restraints on speech violate the First Amendment. Proponents of prior restraint or even ex post punishments understand this reality and want to get around it. Thus, in an effort to avoid constitutional scrutiny and lengthy court battles, they are engaged in a rebranding effort and seeking to push their regulatory agendas through a techno-panicky prism of “algorithmic fairness” or “algorithmic justice.”

Hey, who could possibly be against FAIRNESS and JUSTICE? Of course, the devil is always in the details, as Neil Chilson and I discuss in our new paper for The Federalist Society and Regulatory Transparency Project, “The Coming Onslaught of ‘Algorithmic Fairness’ Regulations.” We document how federal and state policymakers from both parties are currently considering a variety of new mandates for artificial intelligence (AI), machine learning, and automated systems that, if imposed, “would thunder through our economy with one of the most significant expansions of economic and social regulation – and the power of the administrative state – in recent history.”

We note how, at the federal level, bills are being floated with titles like the “Algorithmic Justice and Online Platform Transparency Act” and the “Protecting Americans from Dangerous Algorithms Act,” which would introduce far-reaching regulations requiring AI innovators to reveal more about how their algorithms work or even hold them liable if their algorithms are thought to be amplifying hateful or extremist content. Other proposed measures like the “Platform Accountability and Consumer Transparency Act” and the “Online Consumer Protection Act” would demand greater algorithmic transparency as it relates to social media content moderation policies and procedures. Finally, measures like the “Kids Online Safety Act” would require audits of algorithmic recommendation systems that supposedly target or harm children. Algorithmic regulation is also creeping into proposed privacy regulations, such as the “American Data Protection and Privacy Act of 2022.”

And then there are all the state laws–many of which have been pushed by conservatives–that would mandate “algorithmic transparency” for social media content moderation in the name of countering supposed viewpoint bias. Bills in Florida and Texas take this approach. Meanwhile, conservatives in Congress such as Senator Josh Hawley (R-MO) have pushed bills like the “Ending Support for Internet Censorship Act,” which would require large tech companies to undergo external audits proving that their algorithms and content-moderation techniques are politically unbiased. It’s an open invitation to regulators and trial lawyers to massively regulate technology and speech under the guise of “algorithmic fairness.” Countless left-leaning law professors and European officials have already proposed a comprehensive algorithmic audit apparatus to regulate innovators in every sector.

It’s the rise of the Code Cops. If we continue down this path, it ends with a complete rejection of the permissionless innovation ethos that made America’s information technology sector a global powerhouse. Instead, we’ll be stuck with the very worst type of “Mother, May I” precautionary principle-based regulatory regime that will be imposing the equivalent of occupational licensing requirements for coders.

If code is speech, algorithms are as well. Defenders of innovation freedom need to step up and prepare for the fight to come. [See my earlier essay, “AI Eats the World: Preparing for the Computational Revolution and the Policy Debates Ahead.”] Chilson and I outline the broad contours of the brewing battle over freedom of speech and the freedom to innovate. It will be the most important technology policy issue of the next ten years. I hope you take the time to read our new essay and understand why. And below you will find a few dozen more essays on the same topic if you’d like to dig even deeper.

Additional Reading:

AI Eats the World: Preparing for the Computational Revolution and the Policy Debates Ahead
https://techliberation.com/2022/09/12/ai-eats-the-world-preparing-for-the-computational-revolution-and-the-policy-debates-ahead/
September 12, 2022

[Cross-posted from Medium.]

The Coming Computational Revolution

Thomas Edison once spoke of how electricity was a “field of fields.” This is even more true of AI, which is ready to bring about a sweeping technological revolution. In Carlota Perez’s influential 2009 paper on “Technological Revolutions and Techno-economic Paradigms,” she defined a technological revolution “as a set of interrelated radical breakthroughs, forming a major constellation of interdependent technologies; a cluster of clusters or a system of systems.” To be considered a legitimate technological revolution, Perez argued, the technology or technological process must be “opening a vast innovation opportunity space and providing a new set of associated generic technologies, infrastructures and organisational principles that can significantly increase the efficiency and effectiveness of all industries and activities.” In other words, she concluded, the technology must have “the power to bring about a transformation across the board.”

Expanding Our Skillset

Thus, AI (and AI policy) is multi-dimensional, amorphous, and ever-changing. It has many layers and complexities. This will require public policy analysts and institutions to reorient their focus and develop new capabilities.

Mapping the AI Policy Terrain: Broad vs. Narrow

Beyond talent development, the other major challenge is issue coverage. How can we cover all the AI policy bases? There are two general categories of AI concerns, and supporters of free markets need to be prepared to engage on both battlefields.

Confronting the Formidable Resistance to Change

Finally, free-market analysts and organizations must prepare to defend the general concept of progress through technological change as AI becomes a central social, economic, and legal battleground — both domestically and globally. Every technological revolution involves major social and economic disruptions and gives rise to intense efforts to defend the status quo and block progress. As Perez concludes, “the profound and wide-ranging changes made possible by each technological revolution and its techno-economic paradigm are not easily assimilated; they give rise to intense resistance.”

AI Governance “on the Ground” vs “on the Books”
https://techliberation.com/2022/08/24/ai-governance-on-the-ground-vs-on-the-books/
August 24, 2022

[Cross-posted from Medium]

There are two general types of technological governance that can be used to address challenges associated with artificial intelligence (AI) and computational sciences more generally. We can think of these as “on the ground” (bottom-up, informal “soft law”) governance mechanisms versus “on the books” (top-down, formal “hard law”) governance mechanisms.

Unfortunately, heated debates about the latter type of governance often divert attention from the many ways in which the former can (or already does) help us address many of the challenges associated with emerging technologies like AI, machine learning, and robotics. It is important that we think harder about how to optimize these decentralized soft law governance mechanisms today, especially as traditional hard law methods are increasingly strained by the relentless pace of technological change and ongoing dysfunctionalism in the legislative and regulatory arenas.

On the Grounds vs. On the Books Governance

Let’s unpack these “on the ground” and “on the books” notions a bit more. I am borrowing these descriptors from an important 2011 law review article by Kenneth A. Bamberger and Deirdre K. Mulligan, which explored the distinction between what they referred to as “Privacy on the Books and on the Ground.” They identified how privacy best practices were emerging in a decentralized fashion thanks to the activities of corporate privacy officers and privacy associations who helped formulate best practices for data collection and use.

The growth of privacy professional bodies and nonprofit organizations — especially the International Association of Privacy Professionals (IAPP) — helped better formalize privacy best practices by establishing and certifying internal champions to uphold key data-handling principles within organizations. By 2019, the IAPP had over 50,000 trained members globally, and its numbers keep swelling. Today, it is quite common to find Chief Privacy Officers throughout the corporate, governmental, and non-profit world.

These privacy professionals work together and in conjunction with a wide diversity of other players to “bake-in” widely-accepted information collection/ use practices within all these organizations. With the help of IAPP and other privacy advocates and academics, these professionals also look to constantly refine and improve their standards to account for changing circumstances and challenges in our fast-paced data economy. They also look to ensure that organizations live up to commitments they have made to the public or even governments to abide by various data-handling best practices.

Soft Law vs. Hard Law

These “on the ground” efforts have helped usher in a variety of corporate social responsibility best practices and provide a flexible governance model that can be a complement to, or sometimes even a substitute for, formal “on the books” efforts. We can also think of this as the difference between soft law and hard law.

Soft law refers to agile, adaptable governance schemes for emerging technology that create substantive expectations and best practices for innovators without regulatory mandates. Soft law can take many forms, including guidelines, best practices, agency consultations & workshops, multistakeholder initiatives, and other experimental types of decentralized, non-binding commitments and efforts.

Soft law has become a bit of a gap-filler in the U.S. as hard law efforts fail for various reasons. The most obvious explanation for why the role of hard law governance has shrunk is that it is simply very hard for law to keep up with fast-moving technological developments today. This is known as the pacing problem. Many scholars have identified how the pacing problem gives rise to a “governance gap” or “competency trap” for policymakers because, just as quickly as they are coming to grips with new technological developments, other technologies are emerging quickly on their heels.

Think of modern technologies — especially informational and computational technologies — like a series of waves that come flowing in to shore faster and faster. As soon as one wave crests and then crashes down, another one comes right after it and soaks you again before you’ve had time to recover from the daze of the previous ones hitting you. In a world of combinatorial innovation, in which technologies build on top of one another in a symbiotic fashion, this process becomes self-reinforcing and relentless. For policymakers, this means that just when they’ve worked their way up one technological learning curve, the next wave hits and forces them to try to quickly learn about and prepare for the next one that has arrived. Lawmakers are often overwhelmed by this flood of technological change, making it harder and harder for policies to get put in place in a timely fashion — and equally hard to ensure that any new or even existing policies stay relevant as all this rapid-fire innovation continues.

Legislative dysfunctionalism doesn’t help. Congress has a hard time advancing bills on many issues, and technical matters often get pushed to the bottom of the priorities list. The end result is that Congress has increasingly become a non-actor on tech policy in the U.S. Most of the action lies elsewhere.

What’s Your Backup Plan?

This means there is a powerful pragmatic case for embracing soft law efforts that can at least provide us with some “on the ground” governance efforts and practices. Increasingly, soft law is filling the governance gap because hard law is failing for a variety of reasons already identified. Practically speaking, even if you are dead set on imposing a rigid, top-down, technocratic regulatory regime on any given sector or technology, you should at least have a backup plan in mind if you can’t accomplish that.

This is why privacy governance in the United States continues to depend heavily on such soft law efforts to fill the governance vacuum after years of failed attempts to enact a formal federal privacy law. While many academics and others continue to push for such an over-arching data handling law, bottom-up soft law efforts have played an important role in balancing privacy and innovation.

In a similar way, “on the ground” governance efforts are already flourishing for artificial intelligence and machine learning as policymakers continue to very slowly consider whether new hard law initiatives are wise or even possible. For example, congressional lawmakers have been considering a federal regulatory framework for driverless cars for the past several sessions of Congress. Many people in Congress and in academic circles agree that a federal framework is needed, if for no other reason than to preempt the much-dreaded specter of a patchwork of inconsistent state and local regulatory policies. With so much bipartisan agreement out there on driverless car legislation, it would seem like a federal bill would be a slam dunk. For that reason, year in and year out, people always predict: this is the year we’ll get driverless car legislation! And yet, it never happens due to a combination of special interest opposition from unions and trial lawyers, in addition to the pacing problem issue and Congress focusing its limited attention on other issues.

This is also already true for algorithmic regulation. We hear lots of calls to do something, but it remains unclear what that something is or whether it will get done any time soon. If we could not get a privacy bill through Congress after at least a dozen years of major efforts, chances are that broad-based AI regulation is going to be equally challenging.

Soft Law for AI is Exploding

Thus, soft law will likely fill the governance gap for AI. It already is. I’m working on a new book that documents the astonishing array of soft law mechanisms already in place or being developed to address various algorithmic concerns. I can’t seem to finish the book because there is just so much going on related to soft law governance efforts for algorithmic systems. As Mark Coeckelbergh noted in his recent book on AI Ethics, there’s been an “avalanche of​ initiatives and policy documents” around AI ethics and best practices in recent years. It is a bit overwhelming, but the good news is that there is a lot of consistency in these governance efforts.

To illustrate, a 2019 survey by a group of researchers based in Switzerland analyzed 84 AI ethical frameworks and found “a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy).” A more recent 2021 meta-survey by a team of Arizona State University (ASU) legal scholars reviewed an astonishing 634 soft law AI programs that were formulated between 2016 and 2019. Thirty-six percent of these efforts were initiated by governments, with the others led by non-profits or private sector bodies. Echoing the findings from the Swiss researchers, the ASU report found widespread consensus among these soft law frameworks on values such as transparency and explainability, ethics/rights, security, and bias. This makes it clear that there is considerable consistency among ethical soft law frameworks in that most of them focus on a core set of values to embed within AI design. The UK-based Alan Turing Institute boils its list down to four “FAST Track Principles”: Fairness, Accountability, Sustainability, and Transparency.

The ASU scholars noted how ethical best practices for product design already influence developers today by creating powerful norms and expectations about responsible product design. “Once a soft law program is created, organizations may seek to enforce it by altering how their employees or representatives perform their duties through the creation and implementation of internal procedures,” they note. “Publicly committing to a course of action is a signal to society that generates expectations about an organization’s future actions.”

This is important because many major trade associations and individual companies have been formulating governance frameworks and ethical guidelines for AI development and use. For example, among large trade associations, the U.S. Chamber of Commerce, the Business Roundtable, BSA | The Software Alliance, and ACT | The App Association have all recently released major AI best practice guidelines. Notable corporate efforts to adopt guidelines for ethical AI practices include statements or frameworks by IBM, Intel, Google, Microsoft, Salesforce, SAP, and Sony, to name just a few. These companies are also creating internal champions to push AI ethics through the appointment of Chief Ethical Officers, the creation of official departments, or both, along with additional staff to guide the process of baking-in AI ethics by design.

Once again, there is remarkable consistency among these corporate statements in terms of the best practices and ethical guidelines they endorse. Each trade association or corporate set of guidelines align closely with the core values identified in the hundreds of other soft law frameworks that ASU scholars surveyed. These efforts go a long way toward helping to promote a culture of responsibility among leading AI innovators. We can think of this as the professionalization of AI best practices.

What Soft Law Critics Forget

Some will claim that “on the ground” soft law efforts are not enough, but they typically make two mistakes when saying so.

Their first mistake is thinking that hard law is practical or even optimal for fast-paced, highly mercurial AI and ML technologies. It’s not just that the pacing problem necessitates new thinking about governance. Critics fail to understand how hard law would likely significantly undermine algorithmic innovation because algorithmic systems can change by the minute and require a more agile and adaptive system of governance by their very nature.

This is a major focus of my book and I previously published a draft chapter from my book on “The Proper Governance Default for AI,” and another essay on “Why the Future of AI Will Not Be Invented in Europe.” These essays explain why a Precautionary Principle-oriented regulatory regime for algorithmic systems would stifle technological development, undermine entrepreneurialism, diminish competition and global competitive advantage, and even have a deleterious impact on our national security goals.

Traditional regulatory systems can be overly rigid, bureaucratic, inflexible, and slow to adapt to new realities. They focus on preemptive remedies that aim to predict the future, and future hypothetical problems that may never come about. Worse yet, administrative regulation generally preempts or prohibits the beneficial experiments that yield new and better ways of doing things. When innovators must seek special permission before they offer a new product or service, it raises the cost of starting a new venture and discourages activities that benefit society. We need to avoid that approach if we hope to maximize the potential of AI-based technologies.

The second mistake that soft law critics make is that they fail to understand how many hard law mechanisms actually play a role in supporting soft law governance. AI applications already are regulated by a whole host of existing legal policies. If someone does something stupid or dangerous with AI systems, the Federal Trade Commission (FTC) has the power to address “unfair and deceptive practices” of any sort. And state Attorneys General and state consumer protection agencies also routinely address unfair practices and continue to advance their own privacy and data security policies, some of which are often more stringent than federal law.

Meanwhile, several existing regulatory agencies in the U.S. possess investigatory and recall authority that allows them to remove products from the market when certain unforeseen problems manifest themselves. For example, the National Highway Traffic Safety Administration (NHTSA), the Food & Drug Administration (FDA), and Consumer Product Safety Commission (CPSC) all possess broad recall authority that could be used to address risks that develop for many algorithmic or robotic systems. For example, NHTSA is currently using its investigative authority to evaluate Tesla’s claims about “full self-driving” technology and the agency has the power to take action against the company under existing regulations. Likewise, the FDA used its broad authority to crack down on genetic testing company 23andme many years ago. And CPSC and the FTC have broad authority to investigate claims made by innovators, and they’ve already used it. It’s not like our expansive regulatory state lacks considerable existing power to police new technology. If anything, the power of the administrative state is too broad and amorphous and it can be abused in certain instances.

Perhaps most importantly, our common law system can address other deficiencies with AI-based systems and applications using product defects law, torts, contract law, property law, and class action lawsuits. This is a better way of addressing risks compared to preemptive regulation of general-purpose AI technology because it at least allows the technologies to first develop and then see what actual problems manifest themselves. Better to treat innovators as innocent until proven guilty than the other way around.

There are other thorny issues that deserve serious policy consideration and perhaps even some new rules. But how risks are addressed matters deeply. Before we resort to heavy-handed, legalistic solutions for possible problems, we should exhaust all other potential remedies first.

In other words, “on the ground” soft law governance mechanisms and ex post legal solutions should generally trump ex ante (preemptive, precautionary) regulatory constraints. But we should look for ways to refine and improve soft law governance tools, perhaps through better voluntary certification and auditing regimes to hold developers to a high standard as it pertains to the important AI ethical practices we want them to uphold. This is the path forward to achieve responsible AI innovation without the heavy-handed baggage associated with more formalistic, inflexible, regulatory approaches that are ill-suited for complicated, rapidly-evolving computational technologies.

___________________

Related Reading on AI & Robotics

Why the Future of AI Will Not Be Invented in Europe
https://techliberation.com/2022/08/01/why-the-future-of-ai-will-not-be-invented-in-europe/
August 1, 2022

For my latest column in The Hill, I explored the European Union’s (EU) endlessly expanding push to regulate all facets of the modern data economy. That now includes a new effort to regulate artificial intelligence (AI) using the same sort of top-down, heavy-handed, bureaucratic compliance regime that has stifled digital innovation on the continent over the past quarter century.

The European Commission (EC) is advancing a new Artificial Intelligence Act, which proposes banning some AI technologies while classifying many others under a heavily controlled “high-risk” category. A new bureaucracy, the European Artificial Intelligence Board, will be tasked with enforcing a wide variety of new rules, including “prior conformity assessments,” which are like permission slips for algorithmic innovators. Steep fines are also part of the plan. There’s a lengthy list of covered sectors and technologies, with many others that could be added in coming years. It’s no wonder, then, that the measure has been labelled “the mother of all AI laws” and analysts have argued it will further burden innovation and investment in Europe.

As I noted in my new column, the consensus about Europe’s future on the emerging technology front is dismal to put it mildly. The International Economy journal recently asked 11 experts from Europe and the U.S. where the EU currently stood in global tech competition. Responses were nearly unanimous and bluntly summarized by the symposium’s title: “The Biggest Loser.” Respondents said Europe is “lagging behind in the global tech race,” and “unlikely to become a global hub of innovation.” “The future will not be invented in Europe,” another analyst bluntly concluded.

That’s a grim assessment, but there is no doubt that European competitiveness is suffering today and that excessive regulation plays a significant role in causing it. As I noted in my column, “the EU’s risk-averse culture and preference for paperwork compliance over entrepreneurial freedom” has had serious consequences for continent-wide innovation:

After the continent piled on layers of data restrictions beginning in the mid-1990s, innovation and investment suffered. Regulation grew more complex with the 2018 General Data Protection Regulation (GDPR), which further limits data collection and use. As a result of all the red tape, the EU came away from the digital revolution with “the complete absence of superstar companies.” There are no serious European versions of Microsoft, Google, Facebook, Apple or Amazon. Europe’s leading providers of digital technology services today are American-based companies.

Let’s take a look at a few numbers that illustrate what’s happened in Europe’s tech sector over the past quarter century. Here’s an old KPMG breakdown of market caps for public Internet companies over an important 20-year period, from 1995 to 2015, when the digital technology marketplace was taking shape. Besides the remarkable amount of churn over that period (with only Apple appearing on both lists), the other notable thing is the complete absence of any European companies in 2015.

Next, here’s a chart I constructed using CB Insights data for global unicorns (companies valued at $1 billion or more) from 2010 through early 2022. It shows that the U.S. dominates with fully half the list and China holds a 16 percent share, while all of the European Union’s firms combined account for just a 9 percent slice of the world’s total.

If you want to see a per capita breakdown of VC investment by country, here’s a handy Crunchbase News chart. While the U.S. is geographically much larger than Europe, a breakdown of VC funding on a per capita basis reveals that only Estonia ($915) and Sweden ($700) have per-person startup investment on par with America ($808). No other European country has even half as much per capita VC investment as the U.S., and most don’t even have a quarter as much.

As we enter the “age of AI,” what will this same EU regulatory model mean for AI, machine learning, and robotics in Europe? We do have some early data on that, too. Here’s a breakdown of AI-related VC activity and AI unicorns in 2021 from the recent State of AI Report 2021, with European countries already trailing far behind:

Also, here’s some data on recent AI investment by region from the latest Stanford “AI Index Report 2022” which again highlights a gap that is only growing larger:

It’s important to listen to what actual AI innovators across the Atlantic have to say about the new EU regulatory efforts. Just last month, the UK-based Coalition for a Digital Economy (Coadec), an advocacy group for Britain’s technology-led startups, published a report entitled, “What do AI Startups Want from Regulation?” Coadec surveyed its members to gauge their feelings about the EU’s proposed approach to AI regulation, as well as the UK’s. 76% of those startups said that their business model would either be negatively affected or become infeasible if the UK were to echo the EU by making AI developers liable, and an equal percentage said they had varying concerns about whether it is technically even feasible to make their datasets “free of errors,” as the EU looks set to demand. Respondents also said they feared that the new AI Act would be particularly burdensome to small and mid-size entrepreneurs, who cannot afford the costly compliance hassles that their larger competitors can. This would end up being a replay of the burdens they faced under the GDPR, which decimated small businesses. “The experience of GDPR demonstrated how unclear, complex and expensive regulations drove many startups out of business, and disproportionately impact startups that survived–GDPR compliance cost startups significantly more than it did the Tech Giants,” the Coadec report concluded.

At least those UK-based innovators might be in a slightly better position post-Brexit, with the British government now looking to chart a different–and much less burdensome–governance approach for digital technologies. In fact, the UK government recently released a major policy document on “Establishing a Pro-Innovation Approach to Regulating AI,” which makes a concerted effort to distinguish its approach from the EU’s. “We will ask that regulators focus on high risk concerns rather than hypothetical or low risks associated with AI,” the report noted. “We want to encourage innovation and avoid placing unnecessary barriers in its way.” This is consistent with what the UK government has been saying on technology governance more generally. For example, in a recent report advocating for innovation-friendly regulation, the UK government’s Regulatory Horizons Council argued that, when it comes to the regulation of emerging technologies like AI, “it is also necessary to consider the risk that the intervention itself poses.” “This would include the potential impact on benefits from a particular innovation that might be foregone; it would also include the potential creation of a ‘chilling effect’ on innovation more generally,” the Council concluded. Clearly, this approach to technology policy stands in stark contrast to the EU’s heavy-handed model. So, there is a chance that at least some innovators based in the UK can escape the EU’s regulatory hell.

What about AI innovators stuck on the European continent? What are they saying about the regulations they will soon face? The European DIGITAL SME Alliance, the largest network of small and medium-sized enterprises (SMEs) in the European ICT sector, represents roughly 45,000 digital SMEs. In comments to the EC about the impact of the law, the Alliance highlighted how costly the AI Act’s conformity assessments and other regulations will be for smaller innovators. “This may put a burden on AI innovation,” the Alliance argued, because of the “limited financial and human resources of SMEs.” “[A] regulation that requires SMEs to make these significant investments, will likely push SMEs out of the market,” the group noted. “This is exactly the opposite of the intention to support a thriving and innovative AI ecosystem in Europe.” Moreover, “SMEs will not be able to pass on these costs to their customers in the final customer end pricing,” the Alliance correctly noted, because “[t]he market is global and highly competitive. Therefore, customers will choose cheaper solutions and Europe risks to be left behind in technology development and global competition.”

In March, the Alliance also hosted a forum on “The European AI Act and Digital SMEs,” which featured comments from some operators in this space. Some speakers were quite timid; you could sense they feared pushing back too aggressively against the European Commission, lest they get on the bad side of regulators before the rules go into effect. But Mislav Malenica, Founder & CEO of Mindsmiths, didn’t pull any punches in his remarks. Mindsmiths is trying to build autonomous support systems in many different fields, but its ability to innovate and compete globally will be severely curtailed by the EU AI Act, he argued.

I usually don’t spend time transcribing people’s comments from events, but I went back and watched Malenica’s remarks multiple times because they are so powerful and I wanted to make sure others hear what he was saying. [Malenica’s opening comments during the event run from 42:29 to 49:34 of the video, and he has more to say during the Q&A beginning at 1:27:28.] Here’s a quick summary of a few of Malenica’s key points (listed chronologically):

  • “I’m not sure we are doing everything we can do actually to create an environment that’s innovation friendly.”
  • “we see a lot of uncertainty. We see fear.”
  • “basically we won’t be able to get funding here.”
  • while reading through the AI Act, he notes, “I don’t see start-ups being mentioned anywhere, and startups are the main vehicles of innovation.” […] “I find it very arrogant”
  • if the AI Act becomes law, “what we’ll do in Europe is we’ll create a new market and that’s the AI markets based on fear,” one focused on building products that merely avoid the wrath of government or lawsuits.
  • “we are really stifling innovation” and that means Europeans will have to import autonomous products from foreign companies instead of making them there.

Later, during the Q&A period, Malenica notes how his first virtual currency startup had to use half its investment capital just dealing with regulatory compliance issues, and most venture capitalists wouldn’t get behind launching in Europe because of such legal hassles. He reflects upon what this means for other innovators going forward as the EU prepares to expand its regulatory regime to AI sectors:

  • “I don’t think we’re missing talent. That’s just a consequence” of all the regulation. “We are missing a sense that you have opportunities here. If you [have] the opportunities here, then the talent will come, the funding will come, and so on because people see that they’ll be able to make money, they’ll be able to build companies, and so on.”
  • “If we now take a look at the 10 biggest companies market capitalizations in the world, we’ll see that none of them comes actually from Europe” with U.S. tech companies dominating the list. “So, we missed that wave completely.” Why? “Because we didn’t inspire anyone to take action,” and that is about to happen for AI.
  • “We need to decide if we are going to be a land of opportunities, or will we be just consumers of other people’s tech, the same we are right now” for digital software and services.
  • “We’re already finding excuses for the loss” of the AI market, he argues.

Malenica’s comments are extraordinarily demoralizing if you care about innovation. Now, I’m an American, and one way to look at this dismal situation is that, by hobbling its own startups and existing AI innovators, Europe is doing the U.S. yet another favor by essentially taking itself out of the running in the next great global tech race. Europe’s actions may also mean that America gains many of its best and brightest, as innovators who cannot create the next great algorithmic service or application in the EU come to the U.S. instead. This is exactly what happened over the past few decades with Internet startups, Malenica noted.

But that’s dismal news in another sense. Europe is filled with brilliant innovators, highly-skilled talent, world-class educational institutions, and even many venture capitalists looking to invest in this arena. Unfortunately, the continent’s suffocating regulatory approach makes it nearly impossible for digital technology innovators to have a fighting chance. Through their heavy-handed policies, European officials have essentially declared their innovators “guilty until proven innocent.” And that means that Europeans and the rest of the world are being deprived of many important life-enriching and life-saving AI applications that those innovators could create. Technological innovation is not a zero-sum game that only one country can “win.” Innovation drives growth and prosperity and lifts all boats as its benefits spread throughout the world. When European innovators prosper, people all over the world prosper along with them.

Is there any chance the European Commission softens its stance toward emerging technologies and adopts a more flexible governance approach that instead treats AI innovators as innocent until proven guilty? I think it is extremely unlikely because, as Malenica noted, European technology policy is too rooted in fear of disruption and extreme risk-aversion. EU officials are forgetting the most important lesson from the history of technological innovation: there can be no progress without some risk-taking and corresponding disruption. My favorite quote about the relationship between risk-taking and human progress comes from Wilbur Wright who, along with his brother, helped pioneer human flight. “If you are looking for perfect safety,” Wright said, “you would do well to sit on a fence and watch the birds.” European policymakers are essentially forcing their best and brightest innovators to sit on the fence and watch the rest of the world fly right past them on the digital technology and AI front. The ramifications for the continent will be disastrous. Regardless, as I noted in concluding my recent Hill column, Europe’s approach to AI “shouldn’t be the model the U.S. follows if it hopes to maintain its early lead in AI and robotics. America should instead welcome European companies, workers and investors looking for a more hospitable place to launch bold new AI innovations.”

Alas, European officials appear ready to ignore the deleterious impact of their policies on innovation and competition and instead make regulation their leading export to the world. In fact, the European Commission will soon open a San Francisco office to work more closely with Silicon Valley companies affected by EU tech regulation. European leaders have basically surrendered on the idea of home-grown innovation and are now plowing all their energies into regulating the rest of the world’s largest digital technology companies, most of which are headquartered in the United States. It’s no wonder, then, that The Economist magazine concludes that, “Europe is the free-rider continent” that “has piggybacked on innovation from elsewhere, keeping up with rivals, not forging ahead.” Instead, “the cuddly form of capitalism embraced in Europe has markedly failed to create world-beating companies,” the magazine argues.

European officials want us to believe that they are somehow doing the world a favor by being its global tech regulator, when instead they are simply solidifying the power of the largest digital tech companies, who are the only ones with enough resources–mainly in the form of massive legal compliance teams–to live under the EU’s innovation-crushing regulations. Sadly, many US policymakers hate our own home-grown tech companies so much now that they are willing to let this happen. In a better world, those American lawmakers would stand up to European officials looking to bully tech innovators, and we would reject the innovation-killing recipe that the EU is cooking up for AI markets and expects the rest of the world to eat.


Additional Reading on AI & Robotics:

]]>
https://techliberation.com/2022/08/01/why-the-future-of-ai-will-not-be-invented-in-europe/feed/ 1 77016
Running List of My Research on AI, ML & Robotics Policy https://techliberation.com/2022/07/29/running-list-of-my-research-on-ai-ml-robotics-policy/ https://techliberation.com/2022/07/29/running-list-of-my-research-on-ai-ml-robotics-policy/#respond Fri, 29 Jul 2022 12:51:54 +0000 https://techliberation.com/?p=77020

[last updated 4/3/2025 – Check my Medium page for latest posts]

This is a running list of all the essays and reports I’ve already rolled out on the governance of artificial intelligence (AI), machine learning (ML), and robotics. Why have I decided to spend so much time on this issue? Because this will become the most important technological revolution of our lifetimes. Every segment of the economy will be touched in some fashion by AI, ML, robotics, and the power of computational science. It should be equally clear that public policy will be radically transformed along the way.

Eventually, all policy will involve AI policy and computational considerations. As AI “eats the world,” it eats the world of public policy along with it. The stakes here are profound for individuals, economies, and nations. As a result, AI policy will be the most important technology policy fight of the next decade, and perhaps next quarter century. Those who are passionate about the freedom to innovate need to prepare to meet the challenge as proposals to regulate AI proliferate.

There are many socio-technical concerns surrounding algorithmic systems that deserve serious consideration and appropriate governance steps to ensure that these systems are beneficial to society. However, there is an equally compelling public interest in ensuring that AI innovations are developed and made widely available to help improve human well-being across many dimensions. And that’s the case that I’ll be dedicating my life to making in coming years.

Here’s the list of what I’ve done so far. I will continue to update this as new material is released:

2025

2024

2023

2022

2021 (and earlier)

]]>
https://techliberation.com/2022/07/29/running-list-of-my-research-on-ai-ml-robotics-policy/feed/ 0 77020
My Forthcoming Book on Artificial Intelligence & Robotics Policy https://techliberation.com/2022/07/22/my-forthcoming-book-on-artificial-intelligence-robotics-policy/ Fri, 22 Jul 2022 18:13:14 +0000 https://techliberation.com/?p=77014

I’m finishing up my next book, which is tentatively titled, “A Flexible Governance Framework for Artificial Intelligence.” I thought I’d offer a brief preview here in the hope of connecting with others who care about innovation in this space and are also interested in helping to address these policy issues going forward.

The goal of my book is to highlight the ways in which artificial intelligence (AI), machine learning (ML), robotics, and the power of computational science are set to transform the world—and the world of public policy—in profound ways. As with all my previous books and research products, my goal in this book includes both empirical and normative components. The first objective is to highlight the tensions between emerging technologies and the public policies that govern them. The second is to offer a defense of a specific governance stance toward emerging technologies intended to ensure we can enjoy the fruits of algorithmic innovation.

AI is a transformational technology that is general-purpose and dual-use. AI and ML also build on top of other important technologies—computing, microprocessors, the internet, high-speed broadband networks, and data storage/processing systems—and they will become the building blocks for a great many other innovations going forward. This means that, eventually, all policy will involve AI policy and computational considerations at some level. It will become the most important technology policy issue here and abroad going forward.

The global race for AI supremacy has important implications for competitive advantage and other geopolitical issues. This is why nations are focusing increasing attention on what they need to do to ensure they are prepared for this next major technological revolution. Public policy attitudes and defaults toward innovative activities will have an important influence on these outcomes.

In my book, I argue that, if the United States hopes to maintain a global leadership position in AI, ML, and robotics, public policy should be guided by two objectives:

  1. Maximize the potential for innovation, entrepreneurialism, investment, and worker opportunities by seeking to ensure that firms and other organizations are prepared to compete at a global scale for talent and capital and that the domestic workforce is properly prepared to meet the same global challenges.
  2. Develop a flexible governance framework to address various ethical concerns about AI development or use to ensure these technologies benefit humanity, but work to accomplish this goal without undermining the goals set forth in the first objective.

The book primarily addresses the second of these priorities because getting the governance framework for AI right significantly improves the chances of successfully accomplishing the first goal of ensuring that the United States remains a leading global AI innovator.

I do a deep dive into the many different governance challenges and policy proposals that are floating out there today—both domestically and internationally. The most contentious of these issues involve the so-called “socio-algorithmic” concerns driving calls for comprehensive regulation. Those include the safety, security, privacy, and discrimination risks that AI/ML technologies could pose for individuals and society.

These concerns deserve serious consideration and appropriate governance steps to ensure that these systems are beneficial to society. However, there is an equally compelling public interest in ensuring that AI innovations are developed and made widely available to help improve human well-being across many dimensions.

Getting the balance right requires agile governance strategies and decentralized, polycentric approaches. There are many different values and complex trade-offs in play in these debates, all of which demand tailored responses. But this should not be done in an overly rigid way through complicated, inflexible, time-consuming regulatory mandates that preemptively curtail or completely constrain innovation opportunities. There’s no need to worry about the future if we can’t even build it first. AI innovation must not be treated as guilty until proven innocent.

The more agile and adaptive governance approach I outline in my book builds on the core principles typically recommended by those favoring precautionary principle-based regulation. That is, it is similarly focused on (1) “baking in” best practices and aligning AI design with widely-shared goals and values; and, (2) keeping humans “in the loop” at critical stages of this process to ensure that they can continue to guide and occasionally realign those values and best practices as needed. However, a decentralized governance approach to AI focuses on accomplishing these objectives in a more flexible, evolutionary fashion without the costly baggage associated with precautionary principle-based regulatory regimes.

The key to the decentralized approach is a diverse toolkit of so-called soft law governance solutions. Soft law refers to agile, adaptable governance schemes for emerging technology that create substantive expectations and best practices for innovators without regulatory mandates. Precautionary regulatory restraints will be necessary in some limited circumstances—particularly for certain types of very serious existential risk—but most AI innovations should be treated as innocent until proven guilty.

When things do go wrong, many existing remedies are available, including a wide variety of common law solutions (torts, class actions, contract law, etc), recall authority possessed by many regulatory agencies, and various consumer protection policies and other existing laws. Moreover, the most effective solution to technological problems usually lies in more innovation, not less of it. It is only through constant trial and error that humanity discovers better and safer ways of satisfying important wants and needs.

The book has six chapters currently, although I am toying with adding back in two other chapters (on labor market issues and industrial policy proposals) that I finished but then cut to keep the theme of the book more tightly focused on social and ethical considerations surrounding AI and robotics.

Here are the summaries of the current six chapters in the manuscript:

  • Chapter 1: Understanding AI & Its Potential Benefits – Defining the nature and scope of artificial intelligence and its many components and related subsectors is complicated and this fact creates many governance challenges. But getting AI governance right is vital because these technologies offer individuals and society meaningful improvements in living standards across multiple dimensions.
  • Chapter 2: The Importance of Policy Defaults for Innovation Culture – Every technology policy debate involves a choice between two general defaults: the precautionary principle and the proactionary principle or “permissionless innovation.” Setting the initial legal default for AI technologies closer to the green light of permissionless innovation will enable greater entrepreneurialism, investment, and global competitiveness.
  • Chapter 3: Decentralized Governance for AI: A Framework – The process of embedding ethics in AI design is an ongoing, iterative process influenced by many forces and factors. There will be much trial and error when devising ethical guidelines for AI and hammering out better ways of keeping these systems aligned with human values. A top-down, one-size-fits-all regulatory framework for AI is unwise. A more decentralized, polycentric governance approach is needed—nationally and globally. [This chapter is the meat of the book and several derivative articles will be spun out of it beginning with a report on algorithmic auditing and AI impact assessments.]
  • Chapter 4: The US Governance Model for AI So Far – U.S. digital technology and ecommerce sectors have enjoyed a generally “permissionless” policy environment since the early days of the Internet, and this has greatly benefited our innovation and global competitiveness. While AI has thus far been governed by a similar “light-touch” approach, many academics and policymakers are now calling for aggressive regulation of AI rooted in a precautionary principle-oriented mindset, which threatens to derail a great deal of AI innovation.
  • Chapter 5: The European Regulatory Model & the Costs of Precaution by Default – Over the past quarter century, the European Union has taken a more aggressive approach to digital technology and data regulation, and is now advancing several new comprehensive regulatory frameworks, including an AI Act. The E.U.’s heavy-handed regulatory regime, which is rooted in the precautionary principle, discouraged innovation and investment across the continent in the past and will continue to do so as it grows to encompass AI technologies. The U.S. should reject this model and welcome European innovators looking to escape it.
  • Chapter 6: Existential Risks & Global Governance Issues around AI & Robotics – AI and robotics could give rise to certain global risks that warrant greater attention and action. But policymakers must be careful to define existential risk properly and understand how it is often the case that the most important solution to such risks is more technological innovation to overcome those problems. The greatest existential risk of all would be to block further technological innovation and scientific progress. Proposals to impose global bans or create global regulatory agencies are both unwise and unworkable. Other approaches, including soft law efforts, will continue to play a role in addressing global AI risks and concerns.

This book, which I hope to have out some time later this year, grows out of a large body of research I’ve done over the past decade. [Some of that work is listed down below.] AI, ML, robotics, and algorithmic policy issues will dominate my research focus and outputs over the next few years.

I look forward to doing my small part to help ensure that America builds on the track record of success it has enjoyed with the Internet, ecommerce, and digital technologies. Again, that stunning success story was built on wise policy choices that promoted a culture of creativity and innovation and rejected calls to hold on to past technological, economic, or legal status quos.

Will America rise to the challenge once again by adopting wise policies to facilitate the next great technological revolution? I’m ready for that fight. I hope you are, too, because it will be the most important technology policy battle of our lifetimes.

___________

Recent Essays & Papers on AI & Robotics Policy

]]>
77014
America Shouldn’t Follow EU’s Lead on AI Regulation https://techliberation.com/2022/07/22/america-shouldnt-follow-eus-lead-on-ai-regulation/ https://techliberation.com/2022/07/22/america-shouldnt-follow-eus-lead-on-ai-regulation/#respond Fri, 22 Jul 2022 15:42:08 +0000 https://techliberation.com/?p=77012

For my latest regular column in The Hill, I took a look at the trade-offs associated with the EU’s AI Act. This is derived from a much longer chapter on European AI policy that is in my forthcoming book, and I also plan on turning it into a free-standing paper at some point soon. My oped begins as follows:

In the intensifying race for global competitiveness in artificial intelligence (AI), the United States, China and the European Union are vying to be the home of what could be the most important technological revolution of our lifetimes. AI governance proposals are also developing rapidly, with the EU proposing an aggressive regulatory approach to add to its already-onerous regulatory regime. It would be imprudent for the U.S. to adopt Europe’s more top-down regulatory model, however, which already decimated digital technology innovation in the past and now will do the same for AI. The key to competitive advantage in AI will be openness to entrepreneurialism, investment and talent, plus a flexible governance framework to address risks.

Jump over to The Hill to read the entire thing. And down below you will find all my recent writing on AI and robotics. This will be my primary research focus in coming years.

Additional Reading :

]]>
https://techliberation.com/2022/07/22/america-shouldnt-follow-eus-lead-on-ai-regulation/feed/ 0 77012
Again, We Should Not Ban All Teens from Social Media https://techliberation.com/2022/07/05/again-we-should-not-ban-all-teens-from-social-media/ https://techliberation.com/2022/07/05/again-we-should-not-ban-all-teens-from-social-media/#respond Wed, 06 Jul 2022 00:16:49 +0000 https://techliberation.com/?p=77004

A growing number of conservatives are calling for Big Government censorship of social media speech platforms. Censorship proposals are to conservatives what price controls are to radical leftists: completely outlandish, unworkable, and usually unconstitutional fantasies of controlling things that are ultimately much harder to control than they realize. And the costs of even trying to impose and enforce such extremist controls are always enormous.

Earlier this year, The Wall Street Journal ran a response I wrote to a proposal set forth by columnist Peggy Noonan in which she proposed banning everyone under 18 from all social-media sites (“We Can Protect Children and Keep the Internet Free,” Apr. 15). I expanded upon that letter in an essay here entitled, “Should All Kids Under 18 Be Banned from Social Media?” National Review also recently published an article penned by Christine Rosen in which she also proposes to “Ban Kids from Social Media.” And just this week, Zach Whiting of the Texas Public Policy Foundation published an essay on “Why Texas Should Ban Social Media for Minors.”

I’ll offer a few more thoughts here in addition to what I’ve already said elsewhere. First, here is my response to the Rosen essay. National Review gave me 250 words to respond to her proposal:

While admitting that “law is a blunt instrument for solving complicated social problems,” Christine Rosen (“Keep Them Offline,” June 27) nonetheless downplays the radicalness of her proposal to make all teenagers criminals for accessing the primary media platforms of their generation. She wants us to believe that allowing teens to use social media is the equivalent of letting them operate a vehicle, smoke tobacco, or drink alcohol. This is false equivalence. Being on a social-media site is not the same as operating two tons of steel and glass at speed or using mind-altering substances. Teens certainly face challenges and risks in any new media environment, but to believe that complex social pathologies did not exist before the Internet is folly. Echoing the same “lost generation” claims made by past critics who panicked over comic books and video games, Rosen asks, “Can we afford to lose another generation of children?” and suggests that only sweeping nanny-state controls can save the day. This cycle is apparently endless: Those “lost generations” grow up fine, only to claim it’s the  next generation that is doomed! Rosen casually dismisses free-speech concerns associated with mass-media criminalization, saying that her plan “would not require censorship.” Nothing could be further from the truth. Rosen’s prohibitionist proposal would deny teens the many routine and mostly beneficial interactions they have with their peers online every day. While she belittles media literacy and other educational and empowerment-based solutions to online problems, those approaches continue to be a better response than the repressive regulatory regime she would have Big Government impose on society.

I have a few more things to say beyond these brief comments.

First, as I alluded to in my short response to Rosen, we’ve heard similar “lost generation” stories before. Rosen might as well be channeling the ghost of Dr. Fredric Wertham (author of Seduction of the Innocent), who in the 1950s declared comic books a public health menace and lobbied lawmakers to restrict teen access to them, insisting such comics were “the cause of a psychological mutilation of children.” The same sort of “lost generation” predictions were commonplace in countless anti-video game screeds of the 1990s. Critics were writing books with titles like Stop Teaching Our Kids to Kill and referring to video games as “murder simulators.” Ironically, just as the video game panic was heating up, juvenile crime rates were plummeting. But that didn’t stop the pundits and policymakers from suggesting that an entire generation of so-called “vidiots” was headed for disaster. (See my 2019 short history: “Confessions of a ‘Vidiot’: 50 Years of Video Games & Moral Panics.”)

It is consistently astonishing to me how, as I noted in a 2012 essay, “We Always Sell the Next Generation Short.” There seems to be a never-ending cycle of generational mistrust. “There has probably never been a generation since the Paleolithic that did not deplore the fecklessness of the next and worship a golden memory of the past,” notes Matt Ridley, author of The Rational Optimist.

For example, in 1948, the poet T. S. Eliot declared: “We can assert with some confidence that our own period is one of decline; that the standards of culture are lower than they were fifty years ago; and that the evidences of this decline are visible in every department of human activity.” We’ve heard parents (and policymakers) make similar claims about every generation since then.

What’s going on here? Why does this cycle of generational pessimism and mistrust persist? In a 1992 journal article, the late journalism professor Margaret A. Blanchard offered this explanation:

“[P]arents and grandparents who lead the efforts to cleanse today’s society seem to forget that they survived alleged attacks on their morals by different media when they were children. Each generation’s adults either lose faith in the ability of their young people to do the same or they become convinced that the dangers facing the new generation are much more substantial than the ones they faced as children.”

In a 2009 book on culture, my colleague Tyler Cowen also noted how, “Parents, who are entrusted with human lives of their own making, bring their dearest feelings, years of time, and many thousands of dollars to their childrearing efforts.” Unsurprisingly, therefore, “they will react with extreme vigor against forces that counteract such an important part of their life program.” This explains why “the very same individuals tend to adopt cultural optimism when they are young, and cultural pessimism once they have children,” Cowen says.

Building on Blanchard and Cowen’s observation, I have explained how the most simple explanation for this phenomenon is that many parents and cultural critics have passed through their “adventure window.” The willingness of humans to try new things and experiment with new forms of culture—our “adventure window”—fades rapidly after certain key points in life, as we gradually settle in our ways. As the English satirist Douglas Adams once humorously noted: “Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you’re thirty-five is against the natural order of things.”

There is no doubt social media can create or exacerbate certain social pathologies among youth. But pro-censorship conservatives want to take the easy way out with a Big Government media ban for the ages.

Ultimately, it’s a solution that will not be effective. Raising children and mentoring youth is the hardest task we face as adults because simple solutions rarely exist for complex human challenges. The issues kids face are often particularly hard for parents and other adults to grapple with because we fail to fully understand both the unique problems each generation confronts and the nature of each new medium that youth embrace. Simplistic solutions, even outright bans, will not work or solve serious problems.

An outright government ban on online platforms or digital devices is likely never going to happen due to First Amendment constraints. But even ignoring the jurisprudential barriers, bans won’t work for a reason these conservatives never bother considering: Many parents will help their kids get access to those technologies and evade restrictions on their use. Countless parents already do so in violation of COPPA rules, and not just because they worry that their kid won’t have access to what other kids have. Rather, many parents (like me) want both to communicate more easily with their kids and to ensure that their kids can enjoy those technologies and use them to explore the world.

These conservatives might think parents like me are monsters for allowing our (now grown) children to get on social media when they were teens. I wasn’t blind to the challenges, but I recognized that sticking one’s head in the sand or hoping for divine intervention from the Nanny State was impractical and unwise. The hardest conversations I ever had with my kids were about the ugliness they sometimes experienced online, but those conversations were countered by the many joys that I knew online interactions brought them. Shall I tell you about everything my son learned online before 13 about building model rockets or soapbox derby cars? Or the countless sites my daughter visited gathering ideas for her arts and crafts projects when, before the age of 13, she started hand-painting and selling jean jackets (eventually prompting her to pursue an art school degree)? Again, as I noted in my National Review response, Rosen’s prohibitionist proposal would deny teens these experiences and the countless other routine and entirely beneficial interactions they have with their peers online every day.

There is simply no substitute for talking to your kids in the most open, understanding, and loving fashion possible. My #1 priority with my own children was not foreclosing all the new digital media platforms and devices at their disposal. That was going to be almost impossible. Other approaches are needed.

Yes, of course, the world can be an ugly place. I mean, have you ever watched the nightly news on television? It’s damn ugly. Shouldn’t we block youth access to it when scenes of war and violence are shown? Newspapers are full of ugliness, too. Should a kid be allowed to see the front page of the paper when it discusses or shows the aftermath of school shootings, acts of terrorism, or even just natural disasters? I could go on, but you get the point. And you could try to claim that somehow today’s social media environment is significantly worse for kids than the mass media of old, but you cannot prove it.

Of course you’ll have anecdotes, and many of them will again point to complex social pathologies. But I have entire shelves full of books on my office wall that made similar claims about the effects of books, the telephone, radio and television, comics, cable TV, every musical medium ever, video games, and advertising efforts across all these mediums. Hundreds upon hundreds of studies were done over the past half century about the effects of depictions of violence in movies, television, and video games. And endless court battles ensued.

In the end, nothing came of it because the literature was inconclusive and frequently contradictory. After many years of panicking about youth and media violence, in 2020 the American Psychological Association issued a new statement slowly reversing course on misguided past statements about video games and acts of real-world violence. The APA’s old statement said that evidence “confirms [the] link between playing violent video games and aggression.” But the APA has come around and now says that “there is insufficient scientific evidence to support a causal link between violent video games and violent behavior.” More specifically, the APA now says: “Violence is a complex social problem that likely stems from many factors that warrant attention from researchers, policy makers and the public. Attributing violence to violent video gaming is not scientifically sound and draws attention away from other factors.”

This is exactly what we should expect to find true for youth and social media. Most serious scholars in the field already note that studies and findings about youth and social media must be carefully evaluated, and that many other factors need to be considered when evaluating claims about complex social phenomena.

While Rosen belittles media literacy and other educational and empowerment-based solutions to online problems, those approaches continue to represent the best first-order response when compared to the repressive regulatory regime she would impose on society.

Finally, I want to reiterate what I said in my brief National Review response about the enormous challenges associated with mass criminalization of speech platforms. Rosen seems to imagine that all the costs and controversies will lie on the supply side of social media: Just call for a ban, and then magically all kids disappear from social media while the big evil tech capitalists eat all the costs and hassles. Nonsense. It’s the demand side of criminalization efforts where the most serious costs lie. What do you really think kids are going to do if Uncle Sam suddenly bans everyone under 18 from going on a “social media site,” whatever that very broad term entails? This will become another sad chapter in the history of Big Government prohibitionist efforts that fail miserably, but not before declaring mass groups of people criminals (this time including everyone under 18) and then trying to throw the book at them when they seek to evade those repressive controls. There are better ways to address these problems than with such extremist proposals.


Additional Reading from Adam Thierer on Media & Content Regulation:

VIDEO: My London Talk about the Future of AI Governance (June 13, 2022)

On Thursday, June 9, it was my great pleasure to return to my first work office at the Adam Smith Institute in London and give a talk on the future of innovation policy and the governance of artificial intelligence. James Lawson, who is affiliated with the ASI and wrote a wonderful 2020 study on AI policy, introduced me and also offered some remarks. Among the issues discussed:

  • What sort of governance vision should govern the future of innovation generally and AI in particular: the “precautionary principle” or “permissionless innovation”?
  • Which AI sectors are witnessing the most exciting forms of innovation currently?
  • What are the fundamental policy fault lines in the AI policy debates today?
  • Will fears about disruption and automation lead to a new Luddite movement?
  • How can “soft law” and decentralized governance mechanisms help us solve pressing policy concerns surrounding AI?
  • How did automation affect traditional jobs and sectors?
  • Will the European Union’s AI Act become a global model for regulation and will it have a “Brussels Effect” in terms of forcing innovators across the world to come into compliance with EU regulatory mandates?
  • How will global innovation arbitrage affect the efforts by governments in Europe and elsewhere to regulate AI innovation?
  • Can the common law help address AI risk? How is the UK common law system superior to the US legal system?
  • What do we mean by “existential risk” as it pertains to artificial intelligence?

I have a massive study in the works addressing all these issues. In the meantime, you can watch the video of my London talk here. And thanks again to my friends at the Adam Smith Institute for hosting!

Additional Reading:


Samuel Florman & the Continuing Battle over Technological Progress (April 6, 2022)

Almost every argument against technological innovation and progress that we hear today was identified and debunked by Samuel C. Florman a half century ago. Few others since him have mounted a more powerful case for the importance of innovation to human flourishing than Florman did throughout his lifetime.

Chances are you’ve never heard of him, however. As prolific as he was, Florman did not command as much attention as the endless parade of tech critics whose apocalyptic predictions grabbed all the headlines. An engineer by training, Florman became concerned about the growing criticism of his profession throughout the 1960s and 70s. He pushed back against that impulse in a series of books over the next two decades, including most notably: The Existential Pleasures of Engineering (1976), Blaming Technology: The Irrational Search for Scapegoats (1981), and The Civilized Engineer (1987). He was also a prolific essayist, penning hundreds of articles for a wide variety of journals, magazines, and newspapers beginning in 1959, and he wrote a regular column for MIT Technology Review for sixteen years.

Florman’s primary mission in his books and many of those essays was to defend the engineering profession against attacks emanating from various corners. More broadly, as he noted in a short autobiography on his personal website, Florman was interested in discussing, “the relationship of technology to the general culture.”

Florman could be considered a “rational optimist,” to borrow Matt Ridley’s notable term [1] for those of us who believe, as I have summarized elsewhere, that there is a symbiotic relationship between innovation, economic growth, pluralism, and human betterment.[2] Rational optimists are highly pragmatic and base their optimism on facts and historical analysis, not on dogmatism or blind faith in any particular viewpoint, ideology, or gut feeling. But they are unified in the belief that technological change is a crucial component of moving the needle on progress and prosperity.

Florman’s unique contribution to advancing rational optimism came in the way he itemized the various claims made by tech critics and then powerfully debunked each one of them. He was providing other rational optimists with a blueprint for how to defend technological innovation against its many critics and criticisms. As he argued in The Civilized Engineer, we need to “broaden our conception of engineering to include all technological creativity.”[3] And then we need to defend it with vigor.

In 1982, the American Society of Mechanical Engineers appropriately awarded Florman the distinguished Ralph Coats Roe Medal for his “outstanding contribution toward a better public understanding and appreciation of the engineer’s worth to contemporary society.” Carl Sagan had won the award the previous year. Alas, Florman never attained the same degree of renown as Sagan. That is a shame because Florman was as much a philosopher and a historian as he was an engineer, and his robust thinking on technology and society deserves far greater attention. More generally, his plain-spoken style and straightforward defense of technological progress continues to be a model for how to counter today’s techno-pessimists.

This essay highlights some of the most important themes and arguments found in Florman’s writing and explains its continuing relevance to the ongoing battles over technology and progress.

What Motivates The “Antitechnologists”?

Florman was interested in answering questions about what motivates both engineers and their critics. He dug deep into psychology and history to figure out what makes these people tick. Who are engineers, and why do they do what they do? That was his primary question, and we will turn to his answers momentarily. But he also wanted to know what drove the technology critics to oppose innovation so vociferously.

Florman’s most important contribution to the history of ideas lies in his 6-part explanation of “the main themes that run through the works of the antitechnologists.”[4] Florman used the term “antitechnologists” to describe the many different critics of engineering and innovation. He recognized that the term wasn’t perfect and that some people he labelled as such would object to it. Nevertheless, because these critics offered no umbrella label for their own movement or way of thinking, Florman settled on the one thing that united them: opposition to, or general discomfort with, technology. Hence, the label “antitechnologists.”

Florman surveyed a wide swath of technological critics from many different disciplines—philosophy, sociology, law, and other fields. He condensed their main criticisms into six general points:

  • Technology is a “thing” or a force that has escaped from human control and is spoiling our lives.
  • Technology forces man to do work that is tedious and degrading.
  • Technology forces man to consume things that he does not really desire.
  • Technology creates an elite class of technocrats, and so disenfranchises the masses.
  • Technology cripples man by cutting him off from the natural world in which he evolved.
  • Technology provides man with technical diversions which destroy his existential sense of his own being.[5]

No one before him had crafted such a taxonomy of complaints from tech critics, and no one has done it better since Florman did so in 1976. In fact, it is astonishing how well Florman’s list continues to identify what motivates modern technology critics. New technologies have come and gone, but the same concerns are brought up again and again. Florman’s books addressed and debunked each of these concerns in powerful fashion.

The Relentless Pessimism & Elitism of the Antitechnologists

Florman identified the way a persistent pessimism unifies antitechnologists. “Our intellectual journals are full of gloomy tracts that depict a society debased by technology,” he noted.[6] What motivated such gloom and doom? “It is fear. They are terrified by the scene unfolding before their eyes.”[7] He elaborated:

“The antitechnologists are frightened; they counsel halt and retreat. They tell the people that Satan (technology) is leading them astray, but the people have heard that story before. They will not stand still for vague promises of a psychic contentment that is to follow in the wake of voluntary temperance.”[8]

The antitechnologist’s worldview isn’t just relentlessly pessimistic but also highly elitist and paternalistic, Florman argued. He referred to it as “Platonic snobbery.”[9] The economist and political scientist Thomas Sowell would later call that snobbish attitude, “the vision of the anointed.”[10] Like Sowell, Florman was angered at the way critics stared down their noses at average folk and disregarded their values and choices:

“The antitechnologists have every right to be gloomy, and have a bounden duty to express their doubts about the direction our lives are taking. But their persistent disregard of the average person’s sentiments is a crucial weakness in their argument—particularly when they then ask us to consider the ‘real’ satisfactions that they claim ordinary people experienced in other cultures of other times.”[11]

Florman noted that critics commonly complain about “too many people wanting too many things,” but he noted that, “[t]his is not caused by technology; it is a consequence of the type of creature that man is.”[12] One can moralize all they want about supposed over-consumption or “conspicuous consumption,” but in the end, most of us strive to better our lives in various ways—including by working to attain things that may be out of our reach or even superfluous in the eyes of others.

For many antitechnologists and other social critics, only the noble search for truth and wisdom will suffice. Basically, everybody should just get back to studying philosophy, sociology, and other soft sciences. Modern tech critics, Florman said, fashion themselves as the intellectual descendants of Greek philosophers who believed that, “[t]he ideal of the new Athenian citizen was to care for his body in the gymnasium, reason his way to Truth in the academy, gossip in the agora, and debate in the senate. Technology was not deemed worthy of a free man’s time.”[13]

“It is not surprising to find philosophers recommending the study of philosophy as a way of life,” Florman noted amusingly.[14] But that does not mean all of us want (or even need) to devote our lives to such things. Nonetheless, critics often sneer at the choices made by the rest of us—especially when they involve the fruits of science and technology. “The most effective weapon in the arsenal of the antitechnologists is self-righteousness,” he noted,[15] and, “[a]s seen by the antitechnologists, engineers and scientists are half-men whose analysis and manipulation of the world deprives them of the emotional experiences that are the essence of the good life.”[16]

Indeed, it is not uncommon (both in the past and today) to see tech critics self-anoint themselves “humanists” and then suggest that anyone who thinks differently from them (namely, those who are pro-innovation) are the equivalent of anti-humanistic. I wrote about this in my 2018 essay, “Is It ‘Techno-Chauvinist’ & ‘Anti-Humanist’ to Believe in the Transformative Potential of Technology?” I argued that, “[p]roperly understood, ‘technology’ and technological innovation are simply extensions of our humanity and represent efforts to continuously improve the human condition. In that sense, humanism and technology are compliments, not opposites.”

But the critics remain fundamentally hostile to that notion and they often suggest that there is something suspicious about those who believe, along with Florman, that there is a symbiotic relationship between innovation, economic growth, pluralism, and human betterment. We rational optimists, the critics suggest, are simply too focused on crass, materialistic measures of happiness and human flourishing.

Florman observed this when noting how much grief he and fellow engineers and scientists got when engaging with critics. “Anyone who has attempted to defend technology against the reproaches of an avowed humanist soon discovers that beneath all the layers of reasoning—political, environmental, aesthetic, or moral—lies a deep-seated disdain for ‘the scientific view.’”[17]

Everywhere you look in the world of Science & Technology Studies (STS) today, you find this attitude at work. In fact, the field is perhaps better labelled Anti-Science & Technology Studies, or at least Science & Technology Skeptical Studies. For most STSers, the burden of proof lies squarely on scientists, engineers, and innovators who must prove to some (often undefined) higher authorities that their ideas and inventions will bring worth to society (however the critics measure worth and value, which is often very unclear). Until then, just go slow, the critics say. Better yet, consult your local philosophy department for a proper course of action!

The critics will retort that they are just looking out for society’s best interests and trying to counter that selfish, materialist side of humanity. Florman countered by noting how, “most people are in search of the good life—not ‘the goods life’ as [Lewis] Mumford puts it, although some goods are entailed—and most human desires are for good things in moderate amounts.”[18] Trying to better our lives through the creation and acquisition of new and better goods and services is a natural and quite healthy human instinct in pursuit of some ever-changing definition of whatever each of us considers “the good life.” “Something other than technology is responsible for people wanting to live in a house on a grassy plot beyond walking distance to job, market, neighbor, and school,” Florman responded.[19] We all want to “get ahead” and improve our lot in life. That urge is not forced upon us by technology; it comes quite naturally.

The Power of Nostalgia

I have spent a fair amount of time in my own writing documenting the central role that nostalgia plays in motivating technological criticism.[20] Florman’s books repeatedly highlighted this reality. “The antitechnologists romanticize the work of earlier times in an attempt to make it seem more appealing than work in a technological age,” he noted. “But their idyllic descriptions of peasant life do not ring true.”[21]

The funny thing is, it is hard to pin down the critics regarding exactly when the “golden era” or “good ‘ol days” were. But if there is one thing that they all agree on, it’s that those days have long passed us by. In a 2019 essay on “Four Flavors of Doom: A Taxonomy of Contemporary Pessimism,” philosopher Maarten Boudry noted:

“In the good old days, everything was better. Where once the world was whole and beautiful, now everything has gone to ruin. Different nostalgic thinkers locate their favorite Golden Age in different historical periods. Some yearn for a past that they were lucky enough to experience in their youth, while others locate utopia at a point farther back in time…”

Not all nostalgia is bad. Clay Routledge has written eloquently about how “nostalgia serves important psychological functions,” and can sometimes possess a positive character that strengthens individuals and society. But the nostalgia found in the works of tech critics is usually a different thing altogether. It is rooted in misery about the present and dread of the future—all because technology has apparently stolen away or destroyed all that was supposedly great about the past. Florman noted that “the current pessimism about technology is a renewed manifestation of pastoralism,” a sentiment typically rooted in historical revisionism about bygone eras.[22] Many critics engage in what rhetoricians call “appeals to nature” and wax poetic about the joys of life for Pre-Technological Man, who apparently enjoyed an idyllic life free of the annoying intrusions created by modern contrivances.

Such “good ol days” romanticism is largely untethered from reality. “For most of recorded history humanity lived on the brink of starvation,” Wall Street Journal columnist Greg Ip noted in a column in early 2019. Even a cursory review of history offers voluminous, unambiguous proof that the old days were, in reality, eras of abject misery. Widespread poverty, mass hunger, poor hygiene, disease, short lifespans, and so on were the norm. What lifted humanity up and improved our lot as a species is that we learned how to apply knowledge to tasks in a better way through incessant trial-and-error experimentation. Recent books by Hans Rosling,[23] Steven Pinker,[24] and many others[25] have thoroughly documented these improvements to human well-being over time.

The critics are unmoved by such evidence, preferring to just jump around in time and cherry-pick moments when they feel life was better than it is now. “Fond as they are of tribal and peasant life, the antitechnologists become positively euphoric over the Middle Ages,” Florman quipped.[26] Why? Mostly because the Middle Ages lacked the technological advances of modern times, which the critics loathe. But facts are pesky things, and as Florman insisted, “it is fair to go on to ask whether or not life was ‘better’ in these earlier cultures than it is in our own.”[27] “We all are moved to reverie by talk of an arcadian golden age,” he noted. “But when we awaken from this reverie, we realize that the antitechnologists have diverted us with half-truths and distortions.”[28]

The critics’ reverence for the old days would be humorous if it wasn’t rooted in an arrogant and dangerous belief that society can somehow be reshaped to resemble whatever preferred past the critics desire. “Recognizing that we cannot return to earlier times, the antitechnologists nevertheless would have us attempt to recapture the satisfactions of these vanished cultures,” Florman noted. “In order to do this, what is required is nothing less than a change in the nature of man.”[29] That is, the critics will insist that “something must be done” (namely, something imposed from above via some grand design) to remake humans and discourage their inner homo faber desire to be incessant tool-builders. But this is madness, Florman argued in one of the best passages from his work:

“we are beginning to realize that for mankind there will never be a time to rest at the top of the mountain. There will be no new arcadian age. There will always be new burdens, new problems, new failures, new beginnings. And the glory of man is to respond to his harsh fate with zest and ever-renewed effort.”[30]

If the critics had their way, however, that zest would be dampened and those efforts restrained in the name of recapturing some mythical lost age. This sort of “rosy retrospection bias” is all the more shocking coming, as it does, from learned people who should know a lot more about the actual history of our species and the long struggle to escape utter despair and destitution. Alas, as the great Scottish philosopher David Hume observed in a 1777 essay, “The humour of blaming the present, and admiring the past, is strongly rooted in human nature, and has an influence even on persons endued with the profoundest judgment and most extensive learning.”[31]

Why Invent? Homo Faber Is Our Nature

While taking on the critics and debunking their misplaced nostalgia about the past, Florman mounted a defense of engineers and innovators by noting that the need to tinker and create is in our blood. He began by noting how “the nature of engineering has been misconceived”[32] because, in a sense, we are all engineers and innovators to some degree.

Florman’s thinking was very much in line with Benjamin Franklin, who once noted, “man is a tool-making animal.” “Both genetically and culturally the engineering instinct has been nurtured within us,” Florman argued, and this instinct “was as old as the human race.”[33] “To be human is to be technological. When we are being technological we are being human—we are expressing the age-old desire of the tribe to survive and prosper.”[34] In fact, he claimed, it was no exaggeration to say that humans, “are driven to technological creativity because of instincts hardly less basic than hunger and sex.”[35] Had our past situation been as rosy as the critics sometimes suggest, perhaps we would have never bothered to fashion tools to escape those eras! It was precisely because humans wanted to improve their lives and the lives of their loved ones that we started crafting more and better tools. Flint and firewood were never going to suffice.

But our engineering instincts do not end with basic needs. “Engineering responds to impulses that go beyond mere survival: a craving for variety and new possibilities, a feeling for proportion—for beauty—that we share with the artist,” Florman argued.[36] In essence, engineering and innovation respond to both basic human needs and higher ones at every stage of “Maslow’s pyramid,” which describes a five-level hierarchy of human needs. This same theme is developed in Arthur Diamond’s recent book, Openness to Creative Destruction: Sustaining Innovative Dynamism. As Diamond argues, one of the most unheralded features of technological innovation is that, “by providing goods that are especially useful in pursuing a life plan full of challenging, worthwhile creative projects,” it allows each of us to pursue different conceptions of what we consider a good life.[37] But we are only able to do so by first satisfying our basic physiological needs, which innovation also handles for us.

Florman was frustrated that critics failed to understand this point and equally concerned that engineers and innovators had been cast as uncaring gadget-worshipers who did not see beauty and truth in higher arts and other more worldly goals and human values. That’s hogwash, he argued:

“What an ironic turn of events! For if ever there was a group dedicated to—obsessed with—morality, conscience, and social responsibility, it has been the engineering profession. Practically every description of the practice of engineering has stressed the concept of service to humanity.[38] [. . .] Even in an age of global affluence, the main existential pleasure of the engineer will always be to contribute to the well-being of his fellow man.”[39]

Engineers and innovators do not always set out with some grandiose design to change the world, although some aspire to do so. Rather, the “existential pleasures of engineering” that Florman described in the title of his most notable book come about by solving practical day-to-day problems:

“The engineer does not find existential pleasure by seeking it frontally. It comes to him gratuitously, seeping into him unawares. He does not arise in the morning and say, ‘Today I shall find happiness.’ Quite the contrary. He arises and says, ‘Today I will do the work that needs to be done, the work for which I have been trained, the work which I want to do because in doing it I feel challenged and alive.’ Then happiness arrives mysteriously as a byproduct of his effort.”[40]

And this pleasure of getting practical work done is something that engineers and innovators enjoy collectively by coming together and using specialized skills in new and unique combinations. “[T]echnological progress depends upon a variety of skills and knowledge that are far beyond the capacity of any one individual,” he insisted. “High civilization requires a high degree of specialization, and it was toward high civilization that the human journey appears always to have been directed.”[41] Adam Smith could not have said it any better.

“Muddling Through”: Why Trial-and-Error is the Key to Progress

My favorite insights from Florman’s work relate to the way humans have repeatedly faced up to adversity and found ways to “muddle through.” This was the focus of an old essay of mine— “Muddling Through: How We Learn to Cope with Technological Change”—which argued that humans are a remarkably resilient species and that we regularly find creative ways to deal with major changes through constant trial-and-error experimentation and the learning that results from it.[42]

Florman made this same point far more eloquently long ago:

“We have been attempting to muddle along, acknowledging that we are selfish and foolish, and proceeding by means of trial and error. We call ourselves pragmatists. Mistakes are made, of course. Also, tastes change, so that what seemed desirable to one generation appears disagreeable to the next. But our overriding concern has been to make sure that matters of taste do not become matters of dogma, for that is the way toward violent conflict and tyranny. Trial and error, however, is exactly what the antitechnologists cannot abide.”[43]

It is the error part of trial-and-error that is so vital to societal learning. “Even the most cautious engineer recognizes that risk is inherent in what he or she does,” Florman noted. “Over the long haul the improbable becomes the inevitable, and accidents will happen. The unanticipated will occur.”[44] But “[s]ometimes the only way to gain knowledge is by experiencing failure,” he correctly observed.[45] “To be willing to learn through failure—failure that cannot be hidden—requires tenacity and courage.”[46]

I’ve argued that this represents the central dividing line between innovation supporters and technology critics. The critics are so focused on risk-averse, precautionary-principle-based thinking that they simply cannot tolerate the idea that society can learn more through trial-and-error than through preemptive planning. They imagine it is possible to override that process and predetermine the proper course of action to create a safer, more stable society. In this mindset, failure is to be avoided at all costs through prescriptions and prohibitions. Innovation is to be treated as guilty until proven innocent in the hope of eliminating the error (or risk / failure) associated with trial-and-error experiments. To reiterate, this logic misses the fact that the entire point of trial-and-error is to learn from our mistakes and “fail better” next time, until we’ve solved the problem at hand entirely.[47]

Florman noted that, “sensible people have agreed that there is no free lunch; there are only difficult choices, options, and trade-offs.”[48] In other words, precautionary controls come at a cost. “All we can do is do the best we can, plan where we can, agree where we can, and compromise where we must,” he said.[49] But, again, the antitechnologists absolutely cannot accept this worldview. They are fundamentally hostile to it because they either believe that a precautionary approach will do a better job improving public welfare, or they believe that trial-and-error fails to safeguard any number of other values or institutions that they regard as sacrosanct. This shuts down the learning process from which wisdom is generated. As the old adage goes, “nothing ventured, nothing gained.” There can be no reward without some risk, and there can be no human advances unless we are free to learn from the error portion of trial-and-error.

The Costs of Precautionary Regulation

Florman did not spend much time in his writing mulling over the finer points of public policy, but he did express skepticism about our collective ability to define and enforce “the public interest” in various contexts. A great many regulatory regimes—and their underlying statutes—rest on the notion of “protecting the public interest.” It is impossible to be against that notion, but it is often equally impossible to define what it even means.[50]

This leads to what Florman called “the search for virtues that nobody can define.”[51] “As engineers we are agreed that the public interest is very important; but it is folly to think that we can agree on what the public interest is. We cannot even agree on the scientific facts!”[52] This is especially true today in debates over what constitutes “responsible innovation” or “ethical innovation.”[53] What Florman noted about such conversations three decades ago is equally true today:

“Whenever engineering ethics is on the agenda, emotions come quickly to a boil. […] It is oh so easy to mouth clichés, for example to pledge to protect the public interest, as the various codes of engineering ethics do. But such a pledge is only a beginning and hardly that. The real questions remain: What is the public interest, and how is it to be served?”[54]

That reality makes it extremely difficult to formulate consensus regarding public policies for emerging technologies. And it makes it particularly difficult to define and enforce a “precautionary principle” for emerging technologies that will somehow strike the Goldilocks balance of getting things just right. This was the focus of my 2016 book Permissionless Innovation, which argued that the precautionary principle should be the last resort when contemplating innovation policy. Experimentation with new technologies and business models should generally be permitted by default because, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about,” I argued. The precautionary principle should only be tapped when the harms alleged to be associated with a new technology are highly probable, tangible, immediate, irreversible, catastrophic, or directly threatening to life and limb in some fashion.

For his part, Florman did not want to get his defense of engineering mixed up with politics and regulatory considerations. Engineers and technologists, he noted, come in many flavors and supported many different causes. Generally speaking, they tend to be quite pragmatic and shun strong ideological leanings and political pronouncements.

Of course, at some point, there is no avoiding this fight; one must comment on how to strike the right balance when politics enters the picture and threatens to stifle technological creativity. Florman’s perspectives on regulatory policy were somewhat jumbled, however. On one hand, he expressed concern about excessive and misguided regulations, but he also saw government playing an important role both in supporting various types of engineering projects and regulating certain technological developments:

“The regulatory impulse, running wild, wreaks havoc, first of all by stifling creative and productive forces that are vital to national survival. But it does harm also—and perhaps more ominously—by fomenting a counter-revolution among outraged industrialists, the intensity of which threatens to sweep away many of the very regulations we most need.”[55]

In his 1987 book, The Civilized Engineer, Florman even expressed surprise and regret about growing pushback against regulation during the Reagan years. He also expressed skepticism about “the deceptive allure” of benefit-cost analysis, which was on the rise at the time, saying that the “attempt to apply mathematical consistency to the regulatory process was deplorably simplistic.”[56] I have always been a big believer in the importance of benefit-cost analysis (BCA), so I was surprised to read of Florman’s skepticism of it. But he was writing in the early days of BCA, and it was not entirely clear then how well it would work in practice. Four decades on, BCA has become far more rigorous, academically respected, and well-established throughout government. It has widespread and bipartisan support as a policy evaluation tool.

Florman adamantly opposed any sort of “technocracy”—or administration of government by technically-skilled elites. He thought it was silly that so many tech critics believe that such a thing already existed. “The myth of the technocratic elite is an expression of fear, like a fairy tale about ogres,” he argued. “It springs from an understandable apprehension, but since it has no basis in reality, it has no place in serious discourse.”[57] Nor did he believe that there was any real chance a technocracy would ever take hold. “No matter how complex technology becomes, and no matter how important it turns out to be in human affairs, we are not likely to see authority vested in a class of technocrats.”[58]

Florman hoped for wiser administration of law and regulations that affected engineering endeavors and innovation more generally. Like so many others, he did not necessarily want more law, just better law. One cannot fault that instinct, but Florman was not really interested in fleshing out the finer details of policy about how to accomplish that objective. He preferred instead to use history as a rough guide for policy. From the fall of the Roman Empire to the decline of Britain’s economic might in more recent times, Florman observed the ways in which societal and governmental attitudes toward innovation influenced the relative growth of science, technology, and national economies. In essence, he was explaining how “innovation culture” and “innovation arbitrage” had been realities for far longer than most people realize.[59]

“Where the entrepreneurial spirit cannot be rewarded, and where non-productive workers cannot be discharged, stagnation will set in,” Florman concluded.[60] This is very much in line with the thinking of economic historians like Joel Mokyr[61] and Deirdre McCloskey,[62] who have identified how attitudes toward creativity and entrepreneurialism affect the aggregate innovative capacity of nations, and thus their competitive advantage and relative prosperity in the world.

Debunking Determinism, Anxiety & Alienation Concerns

One of the ironies of modern technological criticism is the way many critics can’t seem to get their story straight when it comes to “technological determinism” versus social determinism. In the extreme view, technological determinism is the idea that technology drives history and almost has a will of its own. It is like an autonomous force that is practically unstoppable. By contrast, social determinism means that society (individuals, institutions, etc.) guide and control the development of technology.

In the field of Science and Technology Studies, technological determinism is a hotly debated topic. Academic and social critics are fond of painting innovation advocates as rigid tech determinists who are little better than uncaring anti-humanistic gadget-worshipers. The critics have employed a variety of other creative labels to describe tech determinism, including: “techno-fundamentalism,” “technological solutionism,” and even “techno-chauvinism.”

Engineers and other innovators often get hit with such labels and accused of being rigid technological determinists who just want to see tech plow over people and politics. But this was, and remains, a ridiculous argument. Sure, there will always be some wild-eyed futurists and extropian extremists who make preposterous claims about how “there is no stopping technology.” “Even now the salvation-through-technology doctrine has some adherents whose absurdities have helped to inspire the antitechnological movement,” Florman said.[63] But that hardly represents the majority of innovation supporters, who well understand that society and politics play a crucial role in shaping the future course of technological development.

As Florman noted, we can dismiss extreme deterministic perspectives for a rather simple reason: technologies fail all the time! “If promising technologies can suffer fatal blows from unexpected circumstances,” Florman correctly argued, then “[t]his means that we are still—however precariously—in control of our own destiny.”[64] He believed that, “technology is not an independent force, much less a thing, but merely one of the types of activities in which people engage.”[65] The rigid view of tech determinism can be dismissed, he said, because “it can be shown that technology is still very much under society’s control, that it is in fact an expression of our very human desires, fancies, and fears.”[66]

But what is amazing about this debate is that some of the most rigid technological determinists are the technology critics themselves! Recall how Florman began his 6-part taxonomy of common complaints from tech critics. “A primary characteristic of the antitechnologists,” Florman argued, “is the way in which they refer to ‘technology’ as a thing, or at least a force, as if it had an existence of its own” and which “has escaped from human control and is spoiling our lives.”[67]

He noted that many of the leading tech critics of the post-war era often spoke in remarkably deterministic ways. “The idea that a man of the masses has no thoughts of his own, but is something on the order of a programmed machine, owes part of its popularity with the antitechnologists to the influential writings of Herbert Marcuse,” he believed.[68] But then such thinking accelerated and gained greater favor with the popularity of critics like French philosopher Jacques Ellul, American historian Lewis Mumford, and American cultural critic Neil Postman.

Their books painted a dismal portrait of a future in which humans were subjugated to the evils of “technique” (Ellul), “technics” (Mumford), or “technopoly” (Postman).  The narrative of their works read like dystopian science fiction. Essentially, there was no escaping the iron grip that technology had on us. Postman claimed, for example, that technology was destined to destroy “the vital sources of our humanity” and lead to “a culture without a moral foundation” by undermining “certain mental processes and social relations that make human life worth living.”

Which gets us to commonly heard concerns about how technology leads to “anxiety” and “alienation.” “Having established the view of technology as an evil force, the antitechnologists then proceed to depict the average citizen as a helpless slave, driven by this force to perform work he detests,” Florman notes.[69] “Anxiety and alienation are the watchwords of the day, as if material comforts made life worse, rather than better.”[70]

These concerns about anxiety, alienation, and “dehumanization” are omnipresent in the work of modern tech critics, and they are also tied up with traditional worries about “conspicuous consumption.” It’s all part of the “false consciousness” narrative they also peddle, which basically views humans as too ignorant to look out for their own good. In this worldview, people are sheep being led to the slaughter by conniving capitalists and tech innovators, who are just trying to sell them things they don’t really need.

Florman pointed out how preposterous this line of thinking is when he noted how critics seem to always forget that, “a basic human impulse precedes and underlies each technological development”:[71]

“Very often this impulse, or desire, is directly responsible for the new invention. But even when this is not the case, even when the invention is not a response to any particular consumer demand, the impulse is alive and at the ready, sniffing about like a mouse in a maze, seeking its fulfillment. We may regret having some of these impulses. We certainly regret giving expression to some of them. But this hardly gives us the right to blame our misfortunes on a devil external to ourselves.”[72]

Consider the automobile, for example. Industrial era critics often focused on it and lambasted the way they thought industrialists pushed auto culture and technologies on the masses. Did we really need all those cars? All those colors? All those options? Did we really even need cars? The critics wanted us to believe that all these things were just imposed upon us. We were being force-fed options we really didn’t even need or want. “Choice” in this worldview is just a fiction; a front for the nefarious ends of our corporate overlords.

Florman demolished this reasoning throughout his books. “However much we deplore the growth of our automobile culture, clearly it has been created by people making choices, not by a runaway technology,” he argued.[73] Consumer demand and choice is not some fiction fabricated and forced upon us, as the antitechnologists suggest. We make decisions. “Those who would blame all of life’s problems on an amorphous technology, inevitably reject the concept of individual responsibility,” Florman retorted. “This is not humanism. It is a perversion of the humanistic impulse.”[74]

A modern tweak on the conspicuous consumption and false consciousness arguments is found in the work of leading tech critics like Evgeny Morozov, who pens attention-grabbing screeds decrying what he regards as “the folly of technological solutionism.” Morozov bluntly states that “our enemy is the romantic and revolutionary problem solver who resides within” all of us, but most specifically within the engineers and technologists.[75]

But would the world really be a better place if tinkerers didn’t try to scratch that itch?[76] In 2021, the Wall Street Journal profiled JoeBen Bevirt, an engineer and serial entrepreneur who has been working to bring flying cars from sci-fi to reality. Channeling Florman’s defense of the existential pleasures associated with engineering, Bevirt spoke passionately about the way innovators can help “move our species forward” through their constant tinkering to find solutions to hard problems. “That’s kind of the ethos of who we are,” he said. “We see problems, we’re engineers, we work to try to fix them.”[77]

When tech critics like Morozov decry “solutionism,” they are essentially saying that innovators like Bevirt need to just shut up and sit down. Don’t try to improve the world through tinkering; just settle for the status quo, the critics basically state. That’s the kiss of death for human progress, however, because it is only through incessant experimentation with new and different approaches to hard problems that we can advance human well-being. “Solutionism” isn’t about just creating some shiny new toy; it’s about expanding the universe of potentially life-enriching and life-saving technologies available to humanity.

Conclusion

This review of Samuel Florman’s work may seem comprehensive, but it only scratches the surface of his wide-ranging writing. Florman was troubled that engineering lacked support, or at least understanding. Perhaps that was because, he reasoned, “[t]here is no single truth that embodies the practice of engineering, no patron saint, no motto or simple credo. There is no unique methodology that has been distilled from millennia of technological effort.” Or, more simply, it may also be the case that the profession lacked articulate defenders. “The engineer may merely be waiting for his Shakespeare,” he suggested.[78]

Through his life’s work, however, Samuel Florman became that Shakespeare: the great bard of engineering and passionate defender of technological innovation and rational optimism more generally. In looking for a quote or two to close out my latest book, I ended with this one from Florman:

“By turning our backs on technological change, we would be expressing our satisfaction with current world levels of hunger, disease, and privation. Further, we must press ahead in the name of the human adventure. Without experimentation and change our existence would be a dull business.”[79]

Let us resolve to make sure that Florman’s greatest fear does not come to pass. Let us resolve to make sure that the great human adventure never ends. And let us resolve to counter the antitechnologists and their fundamentally anti-humanist worldview, which would most assuredly make our existence the “dull business” that Florman dreaded.

We can do better when we put our minds and hands to work innovating in an attempt to build a better future for humanity. Samuel Florman, the great prophet of progress, showed us the way forward.



Endnotes:

[1]    Matt Ridley, The Rational Optimist: How Prosperity Evolves (New York: Harper Collins, 2010).

[2]    Adam Thierer, “Defending Innovation Against Attacks from All Sides,” Discourse, November 9, 2021, https://www.discoursemagazine.com/ideas/2021/11/09/defending-innovation-against-attacks-from-all-sides.

[3]    Samuel C. Florman, The Civilized Engineer (New York: St. Martin’s Griffin, 1987), p. 26.

[4]    Samuel C. Florman, The Existential Pleasures of Engineering (New York: St. Martin’s Griffin, 2nd Edition, 1994), p. 53-4.

[5]    Existential Pleasures of Engineering, p. 53-4.

[6]    Samuel C. Florman, Blaming Technology: The Irrational Search for Scapegoats (New York: St. Martin’s Press, 1981), p. 186.

[7]    Existential Pleasures of Engineering, p. 76.

[8]    Existential Pleasures of Engineering, p. 77.

[9]    The Civilized Engineer, p. 38.

[10]   Thomas Sowell, The Vision of the Anointed: Self-Congratulation as a Basis for Social Policy (New York: Basic Books, 1995).

[11]   Existential Pleasures of Engineering, p. 72.

[12]   Existential Pleasures of Engineering, p. 76.

[13]   The Civilized Engineer, p. 35.

[14]   Existential Pleasures of Engineering, p. 102.

[15]   Blaming Technology, p. 162.

[16]   Existential Pleasures of Engineering, p. 55.

[17]   Blaming Technology, p. 70.

[18]   Existential Pleasures of Engineering, p. 77.

[19]   Existential Pleasures of Engineering, p. 60.

[20]   Adam Thierer, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” Minnesota Journal of Law, Science & Technology 14, no. 1 (2013), p. 312–50, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2012494.

[21]   Existential Pleasures of Engineering, p. 62.

[22]   Blaming Technology, p. 9.

[23]   Hans Rosling, Factfulness: Ten Reasons We’re Wrong about the World—and Why Things Are Better Than You Think (New York: Flatiron Books, 2018).

[24]   Steven Pinker, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (New York: Viking, 2018).

[25]   Gregg Easterbrook, It’s Better than It Looks: Reasons for Optimism in an Age of Fear (New York: Public Affairs, 2018); Michael A. Cohen & Micah Zenko, Clear and Present Safety: The World Has Never Been Better and Why That Matters to Americans (New Haven, CT: Yale University Press, 2019).

[26]   Existential Pleasures of Engineering, p. 54.

[27]   Existential Pleasures of Engineering, p. 72.

[28]   Existential Pleasures of Engineering, p. 72.

[29]   Existential Pleasures of Engineering, p. 55.

[30]   Existential Pleasures of Engineering, p. 117.

[31]   David Hume, “Of the Populousness of Ancient Nations,” (1777), https://oll.libertyfund.org/titles/hume-essays-moral-political-literary-lf-ed.

[32]   The Civilized Engineer, p. 20.

[33]   Existential Pleasures of Engineering, p. 6.

[34]   The Civilized Engineer, p. 20.

[35]   Existential Pleasures of Engineering, p. 115.

[36]   The Civilized Engineer, p. 20.

[37]   Arthur Diamond, Openness to Creative Destruction: Sustaining Innovative Dynamism (Oxford: Oxford University Press, 2019).

[38]   Existential Pleasures of Engineering, p. 19.

[39]   Existential Pleasures of Engineering, p. 147.

[40]   Existential Pleasures of Engineering, p. 148.

[41]   The Civilized Engineer, p. 30.

[42]   Adam Thierer, “Muddling Through: How We Learn to Cope with Technological Change,” Medium, June 30, 2014, https://medium.com/tech-liberation/muddling-through-how-we-learn-to-cope-with-technological-change-6282d0d342a6.

[43]   Existential Pleasures of Engineering, p. 84.

[44]   The Civilized Engineer, p. 71.

[45]   The Civilized Engineer, p. 72.

[46]   The Civilized Engineer, p. 72.

[47]   Adam Thierer, “Failing Better: What We Learn by Confronting Risk and Uncertainty,” in Sherzod Abdukadirov (ed.), Nudge Theory in Action: Behavioral Design in Policy and Markets (Palgrave Macmillan, 2016): 65-94.

[48]   The Civilized Engineer, p. xi.

[49]   Existential Pleasures of Engineering, p. 85.

[50]   Adam Thierer, “Is the Public Served by the Public Interest Standard?” The Freeman, September 1, 1996,  https://fee.org/articles/is-the-public-served-by-the-public-interest-standard.

[51]   The Civilized Engineer, p. 84.

[52]   The Existential Pleasures of Engineering, p. 22.

[53]   Adam Thierer, “Are ‘Permissionless Innovation’ and ‘Responsible Innovation’ Compatible?” Technology Liberation Front, July 12, 2017, https://techliberation.com/2017/07/12/are-permissionless-innovation-and-responsible-innovation-compatible.

[54]   The Civilized Engineer, p. 79.

[55]   Blaming Technology, p. 106.

[56]   The Civilized Engineer, p. 158.

[57]   Blaming Technology, p. 41.

[58]   Blaming Technology, p. 40-1.

[59]   Adam Thierer, “Embracing a Culture of Permissionless Innovation,” Cato Online Forum, November 17, 2014, https://www.cato.org/publications/cato-online-forum/embracing-culture-permissionless-innovation; Christopher Koopman, “Creating an Environment for Permissionless Innovation,” Testimony before the US Congress Joint Economic Committee, May 22, 2018, https://www.mercatus.org/publications/creating-environment-permissionless-innovation.

[60]   The Civilized Engineer, p. 117.

[61]   Joel Mokyr, Lever of Riches: Technological Creativity and Economic Progress (New York: Oxford University Press, 1990).

[62]   Deirdre N. McCloskey, The Bourgeois Virtues: Ethics for an Age of Commerce (Chicago: The University of Chicago Press, 2006); Deirdre N. McCloskey, Bourgeois Dignity: Why Economics Can’t Explain the Modern World (Chicago: The University of Chicago Press, 2010).

[63]   Existential Pleasures of Engineering, p. 57.

[64]   Blaming Technology, p. 22.

[65]   The Existential Pleasures of Engineering, p. 58.

[66]   Blaming Technology, p. 10.

[67]   The Existential Pleasures of Engineering, p. 48, 53.

[68]   Existential Pleasures of Engineering, p. 70.

[69]   Existential Pleasures of Engineering, p. 49.

[70]   Existential Pleasures of Engineering, p. 16.

[71]   Existential Pleasures of Engineering, p. 61.

[72]   Existential Pleasures of Engineering, p. 61.

[73]   Existential Pleasures of Engineering, p. 60.

[74]   Blaming Technology, p. 104.

[75]   Evgeny Morozov, To Save Everything, Click Here: The Folly of Technological Solutionism (New York: Public Affairs, 2013).

[76]   Adam Thierer, “A Net Skeptic’s Conservative Manifesto,” Reason, April 27, 2013, https://reason.com/2013/04/27/a-net-skeptics-conservative-manifesto-2/.

[77]   Emily Bobrow, “JoeBen Bevirt Is Bringing Flying Taxis from Sci-Fi to Reality,” Wall Street Journal, July 9, 2021, https://www.wsj.com/articles/joeben-bevirt-is-bringing-flying-taxis-from-sci-fi-to-reality-11625848177.

[78]   Existential Pleasures of Engineering, p. 96.

[79]   Blaming Technology, p. 193.

The Precautionary Principle: A Plea for Proportionality
https://techliberation.com/2022/02/07/the-precautionary-principle-a-plea-for-proportionality/
Mon, 07 Feb 2022

Gabrielle Bauer, a Toronto-based medical writer, has just published one of the most concise explanations of what’s wrong with the precautionary principle that I have ever read. The precautionary principle, you will recall, generally refers to public policies that limit or even prohibit trial-and-error experimentation and risk-taking. Innovations are restricted until their creators can prove that they will not cause any harms or disruptions. In an essay for The New Atlantis entitled, “Danger: Caution Ahead,” Bauer uses the world’s recent experiences with COVID lockdowns as the backdrop for how society can sometimes take extreme caution too far, and create more serious dangers in the process. “The phrase ‘abundance of caution’ captures the precautionary principle in a more literary way,” Bauer notes. Indeed, another way to look at it is through the prism of the old saying, “better to be safe than sorry.” The problem, she correctly observes, is that, “extreme caution comes at a cost.” This is exactly right and it points to the profound trade-offs associated with precautionary principle thinking in practice.

In my own writing about the problems associated with the precautionary principle (see list of essays at bottom), I often like to paraphrase an ancient nugget of wisdom from St. Thomas Aquinas, who once noted in his Summa Theologica that, if the highest aim of a captain were merely to preserve their ship, then they would simply keep it in port forever. Of course, that is not the only goal a captain has. The safety of the vessel and the crew is essential, but captains brave the high seas because there are good reasons to take such risks. Most obviously, it might be how they make their living. But historically, captains have also taken to the seas as pioneering explorers, researchers, or even just thrill-seekers.

This was equally true when humans first decided to take to the air in balloons, blimps, airplanes, and rockets. A strict application of the precautionary principle would have instead told us we should keep our feet on the ground. Better to be safe than sorry! Thankfully, many brave souls ignored that advice and took to the heavens in the spirit of exploration and adventure. As Wilbur Wright once famously said, “If you are looking for perfect safety, you would do well to sit on a fence and watch the birds.” Needless to say, humans would have never mastered the skies if the Wright brothers (and many others) had not gotten off the fence and taken the risks they did.

Opportunity Costs Matter

Here we get to the true danger of strict versions of the precautionary principle: It essentially becomes a crime to get off the fence and do anything risky at all. This sets up the potential for stasis and stagnation as societal learning is severely curtailed. Progress becomes harder because there can be no reward without some risk. — both individually or societally. “Caution makes sense except when it doesn’t,” Bauer notes. She continues on to note:

Used too liberally, the precautionary principle can keep us stuck in a state of extreme risk-aversion, leading to cumbersome policies that weigh down our lives. To get to the good parts of life, we need to accept some risk.

As I argued in a book on these issues, the root problem with precautionary principle thinking is that “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about.” If societal attitudes and public policy will not tolerate the idea of any error resulting from experimentation with new and better ways of doing things, then we will obviously not get many new and better things! Scientist Martin Rees refers to this truism about the precautionary principle as “the hidden cost of saying no.”

The opportunity cost of inaction or stasis can be hard to quantify but imagine if we organized our entire society around a rigid application of the precautionary principle. Bauer notes that this is basically what we did during COVID. And the results are in. “It’s far past time we ask ourselves when  abundance really means excess, when our precautionary measures against Covid have gone too far, when we have ignored the costs and lost all sense of proportionality.” Unfortunately, the precautionary mindset–which is always rooted in fear of the unknown–took control. As Bauer notes:

It should have been socially acceptable to debate the merits of these tradeoffs, with nuance and without censure. But that is not what happened. Early in the pandemic, an unspoken rule — thou shalt not question the costs — sprang up and stifled discourse.

“And here’s the worst of it: the costs of excess caution can persist long after the initial danger has passed,” she notes. “It’s no different with Covid: our knee-jerk caution may have downstream effects that persist after the virus has ceased to be a threat.” She cites many compelling examples of the negative effects associated with extreme precautionary thinking during COVID, noting how, “[t]he impact of travel and trade restrictions on food security and childhood vaccination in developing countries will likely reverberate for decades.” Moreover:

The Covid-19 pandemic has laid bare the risks of extreme protection: lost businesses, lost livelihoods, lost graduations, lost loves, lost goodbyes; the loss of personal agency over life’s most intimate and meaningful moments; the loss, quite possibly, of our cherished principles of liberal democracy. A recent report by International IDEA, a democracy advocacy organization, concluded that many countries had become more authoritarian as they took steps to contain the pandemic.

This list of lockdown trade-offs goes on and the aggregate costs will be staggering once economists and others get around to better estimating them. As noted, gauging those costs will be challenging because of the many variables and values that come into play. But it remains vital that society takes risk analysis and trade-offs more seriously so that we don’t make these mistakes again and again.

Proportionality is the Key

Toward that end, Bauer makes “a plea for proportionality.” She wants society to strike a more reasonable balance when it comes to policy measures that might block actions and research that could help us better understand how to deal with risk uncertainties. Accordingly, “we must understand when to apply the precautionary principle and when to move on from it.”

“The precautionary principle doesn’t come with such checks and balances. On the contrary, it tends to perpetuate itself and acquire a life of its own,” she notes. In other words, once set in place initially for a given issue or sector, precautionary principle thinking tends to grow like bad weeds until it has taken over everything in sight. (To see the consequences of that in fields like aviation, space, nanotech, and others, please check out J. Storrs Hall’s amazing new book, Where Is My Flying Car?)

Of course, proportionality cuts both ways, and as I noted in my last two books, there are some instances in which at least a light version of the precautionary principle should be preemptively applied, but they are limited to scenarios where the threat in question is tangible, immediate, irreversible, and catastrophic in nature. In such cases, I argue, society might be better suited thinking about when an “anti-catastrophe principle” is needed, which narrows the scope of the precautionary principle and focuses it more appropriately on the most unambiguously worst-case scenarios that meet those criteria. Generally speaking, however, this test is not satisfied in the vast majority of cases. “Innovation Allowed” should be our default principle. 

Conclusion

The single most important thing that we must always remember when debating precautionary principle-based policies is that, just because someone has good intentions and claims safety as their goal, that does not automatically make the world a safer place. To repeat: Excessive safety-related measure can result in less safety overall. Or again, as Bauer says, “extreme caution comes at a cost.”

No one ever summarized this truism more clearly than the great political scientist Aaron Wildavsky, who devoted much of his life’s work to proving how efforts to create a risk-free society would instead lead to an extremely unsafe society. In his 1988 book, Searching for Safety, Wildavsky warned of the dangers of “trial without error” reasoning, and contrasted it with the trial-and-error method of evaluating risk and seeking wise solutions to it. He argued that wisdom is born of experience and that we can learn how to be wealthier and healthier as individuals and a society only by first being willing to embrace uncertainty and even occasional failure. Here was the crucial takeaway:

The direct implication of trial without error is obvious: If you can do nothing without knowing first how it will turn out, you cannot do anything at all. An indirect implication of trial without error is that if trying new things is made more costly, there will be fewer departures from past practice; this very lack of change may itself be dangerous in forgoing chances to reduce existing hazards. . . . Existing hazards will continue to cause harm if we fail to reduce them by taking advantage of the opportunity to benefit from repeated trials.

Trial and error is the basis of all societal learning, and without it, humanity will be less safe and less prosperous over the long run. Gabrielle Bauer’s new essay captures that insight better than anything I’ve read since Wildavsky was writing about the dangers of the precautionary principle. I beg you to jump over to New Atlantis and read her entire article. It’s absolutely essential.


Additional reading from Adam Thierer on the precautionary principle

]]>
https://techliberation.com/2022/02/07/the-precautionary-principle-a-plea-for-proportionality/feed/ 0 76949
How a Section 230 Repeal Could Mean ‘Game Over’ for the Gaming Community https://techliberation.com/2021/06/25/how-a-section-230-repeal-could-mean-game-over-for-the-gaming-community/ https://techliberation.com/2021/06/25/how-a-section-230-repeal-could-mean-game-over-for-the-gaming-community/#respond Fri, 25 Jun 2021 13:22:43 +0000 https://techliberation.com/?p=76888

By: Jennifer Huddleston and Juan Martin Londoño

This year the E3 conference streamed live over Twitch, YouTube, and other online platforms—a reality that highlights the growing importance of platforms and user-generated content to the gaming industry. From streaming content on Twitch, to sharing mods on Steam Workshop, or funding small developing studios on services such as Patreon or Kickstarter, user-generated content has proven vital for the gaming ecosystem. While these platforms have allowed space for creative interaction—which we saw on the livestreams chats during E3—the legal framework that allows all of this interaction is under threat, and changes to a critical internet law could spell Game Over for user-created gaming elements.

 

This law, “Section 230,” is foundational to all user-generated content on the internet. Section 230 protects platforms from lawsuits over both the content they host as well as their moderation decisions, giving them the freedom to curate and create the kind of environment that best fits its customers. This policy is under attack, however, from policymakers on both sides of the aisle. Some Democrats argue platforms are not moderating enough content, thus allowing hate speech and voter suppression to thrive, while some Republicans believe platforms are moderating too much, which promotes “cancel culture” and the limitation of free speech.

 

User-generated content and the platforms that host it have contributed significantly to the growth of the gaming industry since the early days of the internet. This growth has only accelerated during the pandemic, as in 2020 the gaming industry grew 20 percent to a whopping $180 billion market. But changing Section 230 could seriously disrupt user-generated engagement with gaming, making content moderation costlier and riskier for some of gamers’ favorite platforms.

An increased legal liability could mean a platform such as Twitch would face higher compliance costs due to the need to increase its moderation and legal teams. This cost would likely be transferred to creators through a revenue reduction or to viewers through rate hikes—resulting in less content and fewer users. Further, restrictions on moderation could lead to undesirable content and ultimately fewer users and advertisers—leading to more profit losses and less content. Ultimately, platforms might not be able to sustain themselves, leading to fewer platforms and opportunities for fans to engage. Platforms such as Twitch already face these problems, but for now they can determine the best solutions without heavy-handed government intervention or costly legal battles.

 

The impact of changing Section 230 goes beyond video content and could impact some increasingly popular fan creations that are further invigorating the industry. For example, the modding community, composed of gaming fans that modify existing games to create new experiences, often uses various online platforms to share their mods with other players. Modding has kept certain games relevant even years after their release, or propelled games’ popularity by introducing new ways to play them. Such is the case of Grand Theft Auto V’s roleplaying mod, or Arma III’s PlayerUnknown Battlegrounds mod, the inspiration of games such as Fortnite and Call of Duty: Warzone.

 

These modified games are often hosted on platforms such as Steam Workshop, Github, or on independently run community websites. These platforms are often free of charge, either as a complimentary service of a bigger product – in the case of Steam – or are supported purely by ad revenue and donations. Like streaming platforms and message boards, without Section 230 these services would face increased compliance costs or be unable to remove excessively violent, sexually explicit, or hateful content. The result could be that these new twists on old favorites never make it to consumers, as platforms are unable to host these creations and remain viable as businesses.

 

Changing or removing Section 230 protections would upend the complex and dynamic gaming environment on display during E3. It took decades of growth for gaming to establish itself as the new king of entertainment and it has defended itself from a variety of technopanics throughout the years. Pulling the plug on Section 230 could mean “Game Over” for the user-generated content that brings gamers so much fun.

]]>
https://techliberation.com/2021/06/25/how-a-section-230-repeal-could-mean-game-over-for-the-gaming-community/feed/ 0 76888
Some Recent Essays on the Importance of Innovation & the Fight over Technological Progress https://techliberation.com/2020/07/28/some-recent-essays-on-the-importance-of-innovation-the-fight-over-technological-progress/ https://techliberation.com/2020/07/28/some-recent-essays-on-the-importance-of-innovation-the-fight-over-technological-progress/#respond Tue, 28 Jul 2020 15:35:34 +0000 https://techliberation.com/?p=76778

[Updated: March 2022]

I was speaking at a conference recently and discussing my life’s work, which for 30 years has been focused on the importance of innovation and intellectual battles over what we mean progress. I put together up a short list of some things I have written over the last few years on this topic and thought I would just re-post them here. I will try to keep this regularly updated, at least for a few years.

UNDERSTANDING THE CHALLENGE WE FACE:

HOW WE MUST RESPOND = “Rational Optimism” / Right to Earn a Living / Permissionless Innovation

ADDITIONAL READING:

NEW BOOK (tying together all the essays and papers listed above):

 

]]>
https://techliberation.com/2020/07/28/some-recent-essays-on-the-importance-of-innovation-the-fight-over-technological-progress/feed/ 0 76778
Panicking About 5G is a Celebrity Trend You Shouldn’t Follow https://techliberation.com/2020/05/13/panicking-about-5g-is-a-celebrity-trend-you-shouldnt-follow/ https://techliberation.com/2020/05/13/panicking-about-5g-is-a-celebrity-trend-you-shouldnt-follow/#respond Wed, 13 May 2020 14:00:03 +0000 https://techliberation.com/?p=76728

The COVID-19 pandemic has shown how important technology is for enabling social distancing measures while staying connected to friends, family, school, and work. But for some, including a number of celebrities, it has also heightened fears of emerging technologies that could further improve our connectivity. The latest technopanic should not make us fear technology that has added so much to our lives and that promises to help us even more.

Celebrities such as Keri Hilson, John Cusack, and Woody Harrelson have repeated concerns about 5G—from how it could be weakening our immune systems to even causing this pandemic. These claims about 5G have gotten serious enough that Google banned ads with misleading health information regarding 5G, and Twitter has stated it will remove tweets with 5G and health misinformation that could potentially cause harm in light of the COVID-19 pandemic. 5G is not causing the current pandemic, nor has it been linked to other health concerns. As the director of American Public Health Association Dr. Georges C. Benjamin has stated, “COVID-19 is caused by a virus that came through a natural animal source and has no relation to 5G, or any radiation linked to technology.”  As the New York Times has pointed out, much of the non-COVID-19 5G health concerns originated from Russian propaganda news source RT or trace back to a single decades-old flawed study. In short, there is no evidence to support many of the outrageous health claims regarding 5G.

New technologies have often faced unfounded concerns about their potential risks. In the late 19 th and early 20th centuries, many people feared electricity in the home was making people tired and weak (similar to the health claims about 5G today). More recently, many were concerned that technologies such as microwave ovens and cell phones might cause cancer or other health issues, but studies have proved that these worst fears have little grounding in science.

Some of these fears are based on misunderstandings of how technology works or confusion over similar but distinct technologies. For example, in the case of concerns about cell phones and cancer, the fears may stem from misunderstandings about the differences between ionizing and non-ionizing radiation. In a time of uncertainty, we may want to rush to maintain the status quo. But any number of innovations such as the radio, trains, or cars that were once feared have themselves become part of the status quo.

Why does it matter if some people are afraid of new technologies? While it is completely rational to want to avoid catastrophic and irreversible harms, unfounded fears can risk delaying important and beneficial technologies. For example, work by Linda Simon suggests that the exaggerated claims and fears of electricity’s impact on health may have slowed its adoption. While all technologies carry some risks, can we imagine all that might have been lost if we had listened to those trying to convince us to avoid electricity out of an abundance of caution? we may laugh about fears of electricity and not understanding its benefits, we still see extreme reactions out of fear of new technology, such as recent attempts to burn 5G towers in the United Kingdom because of misinformation about the health risks.

The recent pandemic should remind why constantly improving connectivity and internet infrastructure has been beneficial. As more of us are working from home and have an increased number of connected devices, 5G will increase network capacity and enable faster download speeds. These improvements also play a key role in the development of a number of emerging technologies from smart home devices and virtual reality to driverless cars and remote surgery.

The problem is not in individual choices to avoid a specific technology, but rather how such technopanics can impact broader adoption of beneficial technologies and innovation-friendly public policies. The good news is policymakers recognize the importance of policies that enable 5G and are also informing the public on the facts about wireless technology and health. During the COVID-19 pandemic, the Federal Communications Commission has continued to pursue policies that can improve connectivity, including for advancements toward 5G.

While we may want to follow celebrity trends when it comes to the latest fashion or TikTok dances, we should only let them scare us in the movies and not when it comes to 5G. If we only focus on the most outrageous and unfounded claims, our fear might distract us too much to see its benefits.

]]>
https://techliberation.com/2020/05/13/panicking-about-5g-is-a-celebrity-trend-you-shouldnt-follow/feed/ 0 76728
The Top 10 Most-Read Posts of 2019 https://techliberation.com/2020/01/07/the-top-10-most-read-posts-of-2019/ https://techliberation.com/2020/01/07/the-top-10-most-read-posts-of-2019/#respond Tue, 07 Jan 2020 19:18:54 +0000 https://techliberation.com/?p=76646

Technopanics, Progress Studies, AI, spectrum, and privacy were hot topics at the Technology Liberation Front in the past year. Below are the most popular posts from 2019.

Glancing at our site metrics over the past 10 years, the biggest topics in the 2010s were technopanics, Bitcoin, net neutrality, the sharing economy, and broadband policy. Looking forward at the 2020s, I’ll hazard some predictions about what will be significant debates at the TLF: technopanics and antitrust, AVs, drones, and the future of work. I suspect that technology and federalism will be long-running issues in the next decade, particularly for drones, privacy, AVs, antitrust, and healthcare tech.

Enjoy 2019’s top 10, and Happy New Year.

10. 50 Years of Video Games & Moral Panics by Adam Thierer

I have a confession: I’m 50 years old and still completely in love with video games.

As a child of the 1970s, I straddled the divide between the old and new worlds of gaming. I was (and remain) obsessed with board and card games, which my family played avidly. But then Atari’s home version of “Pong” landed in 1976. The console had rudimentary graphics and controls, and just one game to play, but it was a revelation. After my uncle bought Pong for my cousins, our families and neighbors would gather round his tiny 20-inch television to watch two electronic paddles and a little dot move around the screen.

9. The Limits of AI in Predicting Human Action by Anne Hobson and Walter Stover

Let’s assume for a second that AIs could possess not only all relevant information about an individual, but also that individual’s knowledge. Even if companies somehow could gather this knowledge, it would only be a snapshot at a moment in time. Infinite converging factors can affect one’s next decision to not purchase a soda, even if your past purchase history suggests you will. Maybe you went to the store that day with a stomach ache. Maybe your doctor just warned you about the perils of high fructose corn syrup so you forgo your purchase. Maybe an AI-driven price raise causes you to react by finding an alternative seller.

In other words, when you interact with the market—for instance, going to the store to buy groceries—you are participating in a discovery process about your own preferences or willingness to pay.

8. Free-market spectrum policy and the C Band by Brent Skorup

A few years ago I would have definitely favored speed and the secondary market plan. I still lean towards that approach but I’m a little more on the fence after reading Richard Epstein’s work and others’ about the “public trust doctrine.” This is a traditional governance principle that requires public actors to receive fair value when disposing of public property. It prevents public institutions from giving discounted public property to friends and cronies. Clearly, cronyism isn’t the case here and FCC can’t undo what FCCs did generations ago in giving away spectrum. I think the need for speedy deployment trumps the windfall issue here, but it’s a closer call for me than in the past.

One proposal that hasn’t been contemplated with the C Band but might have merit is an overlay auction with a deadline. With such an auction, the FCC gives incumbent users a deadline to vacate a band (say, 5 years). The FCC then auctions flexible-use licenses in the band. The FCC receives the auction revenues and the winning bidders are allowed to deploy services immediately in the “white spaces” unoccupied by the incumbents. The winning bidders are allowed to pay the incumbents to move out before the deadline.

7. STELAR Expiration Warranted by Hance Haney

The retransmission fees were purposely set low to help the emerging satellite carriers get established in the marketplace when innovation in satellite technology still had a long way to go. Today the carriers are thriving business enterprises, and there is no need for them to continue receiving subsidies. Broadcasters, on the other hand, face unprecedented competition for advertising revenue that historically covered the entire cost of content production.

Today a broadcaster receives 28 cents per subscriber per month when a satellite carrier retransmits their local television signal. But the fair market value of that signal is actually $2.50, according to one estimate.

6. What is Progress Studies? by Adam Thierer

How do we shift cultural and political attitudes about innovation and progress in a more positive direction? Collison and Cowen explicitly state that the goal of Progress Studies transcends “mere comprehension” in that it should also look to “identify effective progress-increasing interventions and the extent to which they are adopted by universities, funding agencies, philanthropists, entrepreneurs, policy makers, and other institutions.”

But fostering social and political attitudes conducive to innovation is really more art than science. Specifically, it is the art of persuasion. Science can help us amass the facts proving the importance of innovation and progress to human improvement. Communicating those facts and ensuring that they infuse culture, institutions, and public policy is more challenging.

5. How Do You Value Data? A Reply To Jaron Lanier’s Op-Ed In The NYT by Will Rinehart

All of this is to say that there is no one single way to estimate the value of data.

As for the Lanier piece, here are some other things to consider:

A market for data already exists. It just doesn’t include a set of participants that Jaron wants to include, which are platform users.    

Will users want to be data entrepreneurs, looking for the best value for their data? Probably not. At best, they will hire an intermediary to do this, which is basically the job of the platforms already.

An underlying assumption is that the value of data is greater than the value advertisers are willing to pay for a slice of your attention. I’m not sure I agree with that.

Finally, how exactly do you write these kinds of laws?

4. Explaining the California Privacy Rights and Enforcement Act of 2020 by Ian Adams

As released, the initiative is equal parts privacy extremism and cynical-politics. Substantively, some will find elements to applaud in the CPREA, between prohibitions on the use of behavioral advertising and reputational risk assessment (all of which are deserving of their own critiques), but the operational structure of the CPREA is nothing short of disastrous. Here are some of the worst bits:

3. Best Practices for Public Policy Analysts by Adam Thierer

So, for whatever it’s worth, here are a few ideas about how to improve your content and your own brand as a public policy analyst. The first list is just some general tips I’ve learned from others after 25 years in the world of public policy. Following that, I have also included a separate set of notes I use for presentations focused specifically on how to prepare effective editorials and legislative testimony. There are many common recommendations on both lists, but I thought I would just post them both here together.

2. An Epic Moral Panic Over Social Media by Adam Thierer

Strangely, many elites, politicians, and parents forget that they, too, were once kids and that their generation was probably also considered hopelessly lost in the “vast wasteland” of whatever the popular technology or content of the day was. The Pessimists Archive podcast has documented dozens of examples of this reoccurring phenomenon. Each generation makes it through the panic du jour, only to turn around and start lambasting newer media or technologies that they worry might be rotting their kids to the core. While these panics come and go, the real danger is that they sometimes result in concrete policy actions that censor content or eliminate choices that the public enjoys. Such regulatory actions can also discourage the emergence of new choices.

1. How Conservatives Came to Favor the Fairness Doctrine & Net Neutrality by Adam Thierer

If I divided my time in Tech Policy Land into two big chunks of time, I’d say the biggest tech-related policy issue for conservatives during the first 15 years I was in the business (roughly 1990 – 2005) was preventing the resurrection of the so-called Fairness Doctrine. And the biggest issue during the second 15-year period (roughly 2005 – present) was stopping the imposition of “Net neutrality” mandates on the Internet. In both cases, conservatives vociferously blasted the notion that unelected government bureaucrats should sit in judgment of what constituted “fairness” in media or “neutrality” online.

Many conservatives are suddenly changing their tune, however.

]]>
https://techliberation.com/2020/01/07/the-top-10-most-read-posts-of-2019/feed/ 0 76646
Why Apocalyptic Rhetoric Dominates Tech Policy Debates https://techliberation.com/2019/10/02/why-apocalyptic-rhetoric-dominates-tech-policy-debates/ https://techliberation.com/2019/10/02/why-apocalyptic-rhetoric-dominates-tech-policy-debates/#comments Wed, 02 Oct 2019 15:20:32 +0000 https://techliberation.com/?p=76603

The endless apocalyptic rhetoric surrounding Net Neutrality and many other tech policy debates proves there’s no downside to gloom-and-doomism as a rhetorical strategy. Being a techno-Jeremiah nets one enormous media exposure and even when such a person has been shown to be laughably wrong, the press comes back for more. Not only is there is no penalty for hyper-pessimistic punditry, but the press actually furthers the cause of such “fear entrepreneurs” by repeatedly showering them with attention and letting them double-down on their doomsday-ism. Bad news sells, for both the pundit and the press.

But what is most remarkable is that the press continues to label these preachers of the techno-apocalypse as “experts” despite a track record of failed predictions. I suppose it’s because, despite all the failed predictions, they are viewed as thoughtful & well-intentioned. It is another reminder that John Stuart Mill’s 1828 observation still holds true today: “I have observed that not the man who hopes when others despair, but the man who despairs when others hope, is admired by a large class of persons as a sage.”

Additional Reading:

]]>
https://techliberation.com/2019/10/02/why-apocalyptic-rhetoric-dominates-tech-policy-debates/feed/ 1 76603
Black Mirror Episodes from Medieval Times https://techliberation.com/2019/07/02/black-mirror-episodes-from-medieval-times/ https://techliberation.com/2019/07/02/black-mirror-episodes-from-medieval-times/#respond Tue, 02 Jul 2019 18:28:22 +0000 https://techliberation.com/?p=76516

CollegeHumor has created this amazing video, “Black Mirror Episodes from Medieval Times,” which is a fun parody of the relentless dystopianism of the Netflix show “Black Mirror.” If you haven’t watched Black Mirror, I encourage you to do so. It’s both great fun and ridiculously bleak and over-the-top in how it depicts modern or future technology destroying all that is good on God’s green earth.

The CollegeHumor team picks up on that and rewinds the clock about a 1,000 years to imagine how Black Mirror might have played out on a stage during the medieval period. The actors do quick skits showing how books become sentient, plows dig holes to Hell and unleash the devil, crossbows destroy the dexterity of archers, and labor-saving yokes divert people from godly pursuits. As one of the audience members says after watching all the episodes, “technology will truly be the ruin of us all!” That’s generally the message of not only Black Mirror, but the vast majority of modern science fiction writing about technology (and also a huge chunk of popular non-fiction writing, too.)

If you go far enough back in the history of technology and technological criticism, you actually can find plenty of people insisting that the latest and greatest tech of the day would be the ruin of us all. As I noted here before, you can trace tech criticism at least back to Plato’s Phaedrus, which warned about the dangers of the written word. My colleague Tyler Cowen argues you can trace it even further back to the Bible and the Book of Genesis, especially the story of the Tower of Babel.

One can almost imagine how scorn was heaped on the first person to fashion a blade or a wheel out of stone. Before his untimely passing a few years ago, the great Calestous Juma used to occasionally tweet this hilarious cartoon that depicted just that moment in time. The people that carry those “NO” signs are still all around us today. Technopanics and fear cycles just repeat endlessly, as I have noted in dozens of essays and papers through the years.

Image result for cartoon Protesting against technology the early years

 

]]>
https://techliberation.com/2019/07/02/black-mirror-episodes-from-medieval-times/feed/ 0 76516
I (Eye), Robot? https://techliberation.com/2019/05/08/i-eye-robot/ https://techliberation.com/2019/05/08/i-eye-robot/#respond Wed, 08 May 2019 14:24:57 +0000 https://techliberation.com/?p=76482

[Originally published on the Mercatus Bridge blog on May 7, 2019.]

I became a little bit more of a cyborg this month with the addition of two new eyes—eye lenses, actually. Before I had even turned 50, the old lenses that Mother Nature gave me were already failing due to cataracts. But after having two operations this past month and getting artificial lenses installed, I am seeing clearly again thanks to the continuing miracles of modern medical technology.

Cataracts can be extraordinarily debilitating. One day you can see the world clearly, the next you wake up struggling to see through a cloudy ocular soup. It is like looking through a piece of cellophane wrap or a continuously unfocused camera.

If you depend on your eyes to make a living as most of us do, then cataracts make it a daily struggle to get even basic things done. I spend most of my time reading and writing each workday. Once the cataracts hit, I had to purchase a half-dozen pairs of strong reading glasses and spread them out all over the place: in my office, house, car, gym bag, and so on. Without them, I was helpless.

Reading is especially difficult in dimly lit environments, and even with strong glasses you can forget about reading the fine print on anything. Every pillbox becomes a frightening adventure. I invested in a powerful magnifying glass to make sure I didn’t end up ingesting the wrong things.

For those afflicted with particularly bad cataracts, it becomes extraordinarily risky to drive or operate machinery. More mundane things—watching TV, tossing a ball with your kid, reading a menu at many restaurants, looking at art in a gallery—also become frustrating.

Open Your Eyes to the Wonders of Innovation

In the past, there was very little that could be done about cataracts unless one was willing to undergo extremely dangerous procedures. The oldest type of cataract surgery (“couching”) involved the use of sharp instruments such as thorns and needles to rip the cloudy lens out of the eye. Unsurprisingly, blindness was a common result of this primitive practice. As medical techniques and instruments improved, doctors were able to perform more sophisticated and successful surgeries, albeit still with some risks because human hands were still doing much of the work.

Today, thanks to remarkable advances in medicine, all this is done in a few minutes with the assistance of laser technology. Better yet, patients get to choose exactly what sort of replacement lens they will have installed. I chose “multifocal intraocular” replacement lenses, which let me see near and far equally well.

When you have cataracts in both eyes, they usually perform the surgeries a few weeks apart to make sure one eye comes out alright before getting the other done. Both my outpatient procedures were quick, painless, and remarkably effective. Astonishingly, within 24 hours of having both surgeries, I tested at better than 20/15 vision, which is close to perfect. It was like regaining a lost superpower.

Am I a Cyborg?

My first-hand experience with the miracles of modern medical technology makes me feel even more strongly about what I do for a living. I have spent my life covering emerging technology policy and responding to tech critics, who have a litany of grievances about modern inventions. One common complaint is that today’s technologies are “dehumanizing,” or threaten to turn us all into some sort of cyborgs.

To be sure, my eye surgeries did indeed make me just a little bit less human. After all, I am walking around today with artificial lenses affixed to my eyeballs. Moreover, I previously had eye surgery to correct strabismus, which is basically a form of crossed eyes. Had I remained perfectly “human” or “natural,” I would still be trying to look at the world through two crossed eyes covered with cloudy lenses. No thanks, Mother Nature!

Incidentally, I also have a metal plate and six pins in my ankle from a nasty compound fracture I sustained in the late 1990s. So, my foot isn’t completely “natural” either. But without those implants, I would not likely have walked properly again. Also, due to a combination of bad genes and poor dietary habits, my mouth is full of so many replacement teeth and crowns that I can’t even count them all. Without them, I probably would have needed dentures by age 40, just as my poor grandmother did once her teeth failed her for similar reasons.

Meanwhile, my left knee and right hip have been acting up in recent years, making me wonder if replacements may be needed down the road. Finally, my hearing isn’t so great either after years of abusing my ears at concerts and with speakers played at unhealthy volumes. (Turn down those headphones, kids!) I suspect some sort of hearing supplement awaits me in the future so I can continue to hear properly.

Enhancing Our Humanity

Given the medical procedures I’ve had done or might do, it’s fair to say that the critics are correct: I really am becoming more of a cyborg—part biological, part technological. But what of it? Certainly, my life and the lives of countless other people have been improved thanks to “artificial” improvements to our bodies.

As Joel Garreau noted in his brilliant 2005 book, Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies—And What It Means to Be Human, the history of our species is one of constant improvements to our health and capabilities through technological means. We have augmented our senses and abilities through the use of spectacles, hearing aids, artificial limbs, implants, and various other specialized medicines and treatments. We are living longer, healthier, less painful lives because of it.

Some critics respond by saying that certain “basic” technological improvements to human health are fine, or perhaps should even be subsidized and available to all. One era’s “radical” enhancements become the next generation’s human rights! We have seen that story unfold in the realm of reproductive health, for example. As Jordan Reimschisel and I have documented, in vitro fertilization (IVF) was originally met with hostility in the 1970s, with various authorities objecting to the idea of being able to “play God.” Opposition subsided quickly, however, as public acceptance and demand grew. Today, IVF is often covered by insurance plans.

Still, critics of newer technological capabilities tend to frown upon more sophisticated technological enhancements that could radically enhance our capabilities in ways that supposedly “dehumanize” us. There are always risks associated with new technological capabilities, but through ongoing trial and error experimentation, we find new ways to counter adversity and ailments—and yes, even overcome some of our inherent human limitations. We are not destined to become mindless automatons just because technology enhances our humanity in these ways. Indeed, there is nothing more human than building new and better tools to improve the quality of the lives of people across the globe.

We Can Cope with Change

Critics are fond of falling back on worst-case “technopanic” scenarios ripped from sci-fi novels, movies, and shows to explain how, if we are not careful, we are all just one modification away from creating (or becoming) Frankenstein monsters. We should heed those warnings to some extent, but not to the extent those critics suggest.

There are legitimate ethical issues associated with certain medical treatments and human enhancements. Genetic editing, for example, holds both promise and peril for our species. By modifying our genetic code, we can counter or even defeat debilitating or deadly diseases or ailments before they hobble us or our children. Of course, genetic modification could also be used in unsettling ways by parents or governments to create “designer babies” that have no choice in how their genetic code is altered before birth.

Ethical guidelines, and even some public policies, will need to be crafted and continuously updated to keep pace with these challenges. But we must not let worst-case thinking determine the future of all forms of human modification such that the many possible best-case outcomes are discouraged in the process. That would represent a massive setback for the millions of humans, including the unborn ones, who might be threatened by debilitating ailments.

Just as technological innovation gave me (quite literally) a new outlook on the world, so too can it open up new possibilities for countless others. Each day brings inspiring news about how innovation is helping us overcome whatever ails us. The Wall Street Journal reported recently that “[s]cientists have harnessed artificial intelligence to translate brain signals into speech, in a step toward brain implants that one day could let people with impaired abilities speak their minds.”

More modern miracles like that await us—so long as critics and regulators don’t hold back important innovations in medical technology. In the meantime, thanks to my new cyborg eyes, I have seven old pairs of reading glasses I no longer need, in case anyone wants them.

]]>
https://techliberation.com/2019/05/08/i-eye-robot/feed/ 0 76482
Countering Threats to Innovation with Rational Optimism https://techliberation.com/2019/04/29/countering-threats-to-innovation-with-rational-optimism/ https://techliberation.com/2019/04/29/countering-threats-to-innovation-with-rational-optimism/#respond Mon, 29 Apr 2019 20:30:02 +0000 https://techliberation.com/?p=76478

Over at the American Institute for Economic Research blog, I recently posted two new essays discussing increasing threats to innovation and discussing how to counter them. The first is on “The Radicalization of Modern Tech Criticism,” and the second discusses, “How To Defend a Culture of Innovation During the Technopanic.”

“Technology critics have always been with us, and they have sometimes helped temper society’s occasional irrational exuberance about certain innovations,” I note in the opening of the first essay. The problem is that the “technology critics sometimes go much too far and overlook the importance of finding new and better ways of satisfying both basic and complex human needs and wants.” I go on to highlight the growing “technopanic” rhetoric we sometimes hear today, including various claims that “it’s OK to be a Luddite” and calls for a “degrowth movement” that would slow the wheels of progress. That would be a disaster for humanity because, as I note in concluding that first essay:

Through ongoing trial-and-error tool building, we discover new and better ways of satisfying human needs and wants to better our lives and the lives of those around us. Human flourishing is dependent upon our collective willingness to embrace and defend the creativity, risk-taking, and experimentation that produces the wisdom and growth that propel us forward. By contrast, today’s neo-Luddite tech critics suggest that we should just be content with the tools of the past and slow down the pace of technological innovation to supposedly save us from any number of dystopian futures they predict. If they succeed, it will leave us in a true dystopia that will foreclose the entrepreneurialism and innovation opportunities that are paramount to raising the standard of living for billions of people across the world.

In the second essay, I make an attempt to sketch out a more robust vision and set of principles to counter the tech critics. Building on my last book, as well as a forthcoming one, I outline a sort of “rational-optimist creed.” This vision is inspired by the important work of Matt Ridley and his excellent book, The Rational Optimist: How Prosperity Evolves. Generally speaking, rational optimists:

  • believe there is a symbiotic relationship between innovation, economic growth, pluralism, and human betterment, but also acknowledge the various challenges sometimes associated with technological change;
  • look forward to a better future and reject overly nostalgic accounts of some supposed “good ‘ol days” or bygone better eras;
  • base our optimism on facts and historical analysis, not on blind faith in any particular viewpoint, ideology, or gut feeling;
  • support practical, bottom-up solutions to hard problems through ongoing trial-and-error experimentation, but are not wedded to any one process to get the job done;
  • appreciate entrepreneurs for their willingness to take risks and try new things, but do not engage in hero worship of any particular individual, organization, or particular technology.

Going further, I build on the excellent work of Robert D. Atkinson, founder and president of the Information Technology and Innovation Foundation, who in his 2005 book, The Past and Future of America’s Economy, identified the way “a political divide is emerging between preservationists who want to hold onto the past and modernizers who recognize that new times require new means.” I tried to provide a breakdown of how this conflict of visions plays out in various ways.

I also highlight some of my favorite works by other rational optimists, including Steven Pinker (Enlightenment Now), Deirdre McCloskey (Bourgeois Equality), Calestous Juma (Innovation and Its Enemies), Samuel Florman (The Existential Pleasures of Engineering), Virginia Postrel (The Future and Its Enemies), and Joel Mokyr (The Lever of Riches: Technological Creativity and Economic Progress).

I encourage you to jump over to the AIER blog and read both essays in full.

 

]]>
https://techliberation.com/2019/04/29/countering-threats-to-innovation-with-rational-optimism/feed/ 0 76478
On Isolation & Inattention Panics https://techliberation.com/2018/11/26/on-isolation-inattention-panics/ https://techliberation.com/2018/11/26/on-isolation-inattention-panics/#respond Mon, 26 Nov 2018 21:33:31 +0000 https://techliberation.com/?p=76414

Last week, science writer Michael Shermer tweeted out this old xkcd comic strip that I had somehow missed before. Shermer noted that it represented “another reply to pessimists bemoaning modern technologies as soul-crushing and isolating.” Similarly, there’s a meme making the rounds on Twitter that jokes about how newspapers made us just as antisocial in the past as newer technologies supposedly do today.

The sentiments expressed by the comic and that image make it clear how people often tend to romanticize past technologies or fail to remember that many people expressed the same fears about them as critics do today about newer ones. I’ve written dozens of articles about “moral panics” and “techno-panics,” most of which are cataloged here. The common theme of those essays is that, when it comes to fears about innovations, there really is nothing new under the sun. Academics, social critics, religious leaders, politicians and even average parents tend to panic over the same problems time and time again. The only thing that changes is the particular medium or technology that is the object of their collective ire.

Isolation and inattention panics are some of the most common “fear cycles” that we have seen repeatedly play out through the ages. Indeed, sociologist Frank Furedi reminds us that panics over isolation, distraction, or inattention have been quite common. Consistent with that xkcd comic, Furedi has documented how “inattention has served as a sublimated focus for apprehensions about moral authority” going back to at least the early 1700s and continuing on through the next two centuries. During those years, he notes:

Inattention was increasingly perceived as an obstacle to the socialisation of young people. Countering the habit of inattention among children and young people became the central concern of pedagogy in the 18th century […]  During the 19th century, the state of inattention became thoroughly moralised. Inattentiveness was perceived as a threat to industrial progress, scientific advance and prosperity.

Today, however, the panic over inattention has ramped up, Furedi argues:

Unlike in the 18th century when it was perceived as abnormal, today inattention is often presented as the normal state. The current era is frequently characterised as the Age of Distraction, and inattention is no longer depicted as a condition that afflicts a few. Nowadays, the erosion of humanity’s capacity for attention is portrayed as an existential problem, linked with the allegedly corrosive effects of digitally driven streams of information relentlessly flowing our way.

While I generally agree these panics are overblown, one must also admit that there is some degree of truth to all of them in the sense that each new technology presents us with some added level of potential distraction. And today we have more of those potential distractions than ever before. So, something’s gotta give, right?

“What information consumes is rather obvious,” Nobel Prize-winning economist and psychologist Herbert Simon remarked in 1971: “the attention of its recipients. Hence a wealth of information creates a poverty of attention, and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.” Almost a half century later, we are confronted with a “wealth of information” that Simon could not have imagined, and that’s what has many critics worried about the potentially socially-destructive consequences of new technologies.

But social critics who write about this supposed “poverty of attention” problem have taken matters to the extreme and concocted some entertaining rhetorical ploys in an attempt to one-up each other on the panic meter. In a 2005 book, I discussed dozens of colorful book and article titles and terms like: “information overload;” “cognitive overload;” “information anxiety;” “information fatigue syndrome;” “information paralysis;” “techno-stress;” “information pollution;” “data smog;” and even “data asphyxiation.”

And that was all pre-Facebook and pre-Twitter! A dozen years later, this isolation-is-killing-us theme is becoming even more prevalent in books and articles. There are far too many books of this ilk to list here, but a quick sampling of the most popular ones would include: Nick Carr (The Shallows), Franklin Foer (World Without Mind), Maggie Jackson (Distracted), Sherry Turkle (Alone Together), Eli Pariser (The Filter Bubble), John Freeman (The Tyranny of E-Mail), and Cass Sunstein (Republic.com), among many others. I have an entire bookshelf in my office filled with nothing but books of this variety, all penned over just the past 20 years.

Perhaps the sheer volume of panicky tracts suggests that there must be something to these fears. Let’s be clear: isolation, distraction, and inattention are problems. But to some extent, these are problems that have always been with us and are not going away any time soon.

Social critics and cranky intellectuals love to complain about new technologies, and that’s never going to end. The best of that criticism will incorporate practical strategies for living a better life and suggest steps for how we all can find a better balance with the technologies that dominate our lives–today, tomorrow, and on into the future.

Sadly, most critics take a different approach which implicitly suggests we have somehow departed from a golden age of living and that only a dystopian hellscape awaits us from here on out (if we’re not already living in it). It’s utter poppycock. As I’ve written before, pastoral myths and public square fantasies about some supposedly glorious but now-lost “good old days” are a lot of fun right up until you realize that the old days were, in fact, eras of abject misery. By almost every meaningful metric, we are better off today than we were in the past, and that is probably just as true for things that we don’t have metrics for, including “attentiveness” or “distractibility.”

We’d all like to think that people–especially kids–were somehow more attentive, more social, and more civil in the past than they are in today’s seemingly more cluttered, cacophonous, hurly-burly modern era. But there is absolutely no concrete evidence suggesting that is true and, as Furedi shows, there exists plenty of anecdotal evidence that when it comes to inattention, things really haven’t changed that much at all. We can and should strive to do better and find constructive solutions to problems such as these, but we should not go overboard with rhetorical threat inflation about the nature or severity of this problem. Nor should we pursue impractical or highly destructive solutions that would undermine the many other benefits associated with our new technological capabilities.

Ironically, at their very worst, isolation or inattention panics accomplish the exact opposite of what some social critics suggest that they desire. The critics often claim that they are just looking out for the next generation and trying to chart a better path for them. In reality, however, those critics are often just engaging in the same sort of fear-mongering and youth-shaming that countless other generations have before with their “KIDS THESE DAYS!” complaints. It’s always easy for intellectuals to tap into the worst fears of parents and policymakers by suggesting that the younger generation has lost the ability to reason or communicate effectively. And yet, each generation somehow figures out how to muddle through. We are an imperfect species, but we are also a highly resilient one.

Of course, that won’t stop an entirely new generation of critics from panicking about whatever future technology is apparently distracting the next generation to death. Fear sells and panics get attention. The calmer truths that history teaches us take longer to appreciate.

Bill Mauldin, Life magazine, Jan. 1950

 



]]>
https://techliberation.com/2018/11/26/on-isolation-inattention-panics/feed/ 0 76414
Some data on wireless networks and cancer rates https://techliberation.com/2018/11/06/some-data-on-wireless-networks-and-cancer-rates/ https://techliberation.com/2018/11/06/some-data-on-wireless-networks-and-cancer-rates/#comments Tue, 06 Nov 2018 18:33:17 +0000 https://techliberation.com/?p=76401

By Brent Skorup and Trace Mitchell

An important benefit of 5G cellular technology is more bandwidth and more reliable wireless services. This means carriers can offer more niche services, like smart glasses for the blind and remote assistance for autonomous vehicles. A Vox article last week explored an issue familiar to technology experts: will millions of new 5G transmitters and devices increase cancer risk? It’s an important question but, in short, we’re not losing sleep over it.

5G differs from previous generations of cellular technology in that “densification” is important–putting smaller transmitters throughout neighborhoods. This densification process means that cities must regularly approve operators’ plans to upgrade infrastructure and install devices on public rights-of-way. However, some homeowners and activists are resisting 5G deployment because they fear more transmitters will lead to more radiation and cancer. (Under federal law, the FCC has safety requirements for emitters like cell towers and 5G. Therefore, state and local regulators are not allowed to make permitting decisions based on what they or their constituents believe are the effects of wireless emissions.)

We aren’t public health experts; however, we are technology researchers and decided to explore the telecom data to see if there is a relationship. If radio transmissions increase cancer, we should expect to see a correlation between the number of cellular transmitters and cancer rates. Presumably there is a cumulative effect: the more cellular radiation people are exposed to, the higher the cancer rates.
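For the curious, the test we describe can be sketched in a few lines of Python. To be clear, the two series below are made-up illustrative numbers, not the actual FCC transmitter counts or NIH incidence figures; they just show the shape of the exercise: a steeply rising transmitter count against an essentially flat cancer rate should yield a correlation near zero.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical yearly series: transmitters climb steeply while the
# brain/nervous-system cancer rate stays essentially flat.
transmitters = [100, 140, 190, 250, 320, 400]   # thousands (illustrative)
cancer_rate = [6.4, 6.5, 6.3, 6.5, 6.4, 6.4]    # per 100,000 (illustrative)

r = pearson(transmitters, cancer_rate)
print(f"r = {r:.2f}")  # near zero: transmitter growth doesn't track the rate
```

A strong cumulative-exposure effect would show up as a correlation near +1; on a flat cancer series like this one, the coefficient sits near zero.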

From what we can tell, there is no link between cellular systems and cancer. Despite a huge increase in the number of transmitters in the US since 2000, the nervous system cancer rate hasn’t budged. In the US, the number of wireless transmitters has increased massively–300%–in 15 years. (This is on the conservative side–there are tens of millions of WiFi devices that are also transmitting but are not counted here.)

But the US cancer rate is the dog that didn’t bark. In that same span of time, the types of cancer you would expect if cellphones pose a cancer risk–brain and nervous-system cancers–have remained flat. If anything, as the NIH has said, these cancer rates have fallen slightly.

It’s a seeming paradox: the US saw the introduction of 300,000 fairly powerful cell transmitters and hundreds of millions of (lower-power) devices that transmit signals through the air twenty-four hours a day, seven days a week, every day of the year, yet these transmissions have no apparent effect on cancer rates.

The fear of 4G and 5G transmitters is due to a common misunderstanding about radiation. Significant exposure to ionizing radiation, the kind put off by X-rays and ultraviolet light, does have the potential to cause cancer. However, as the Vox article and other experts point out, cellular systems and devices don’t put off ionizing radiation. Tech devices emit a form of non-ionizing radiation, the type of radiation you receive from the visible light that bounces off, say, a book you hold in your hand. Unlike ionizing radiation, this non-ionizing radiation is too weak to alter DNA.
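The energy gap is easy to quantify, because a photon’s energy is just Planck’s constant times its frequency (E = hf). Here is a back-of-the-envelope sketch; the 28 GHz figure is an assumed high-band (millimeter-wave) 5G frequency, and the ~10 eV threshold is a round figure for the energy needed to ionize molecules:

```python
H_PLANCK = 6.626e-34  # Planck constant, joule-seconds
EV = 1.602e-19        # joules per electron-volt

def photon_energy_ev(freq_hz):
    """Energy of a single photon at the given frequency, in electron-volts."""
    return H_PLANCK * freq_hz / EV

e_5g = photon_energy_ev(28e9)    # assumed high-band 5G carrier, 28 GHz
e_uv = photon_energy_ev(1.2e15)  # ultraviolet light, ~1,200 THz

IONIZATION_THRESHOLD = 10.0  # rough eV scale for ionizing molecules

print(f"5G photon: {e_5g:.1e} eV")  # on the order of 1e-4 eV
print(f"UV photon: {e_uv:.1f} eV")  # several eV, approaching the ionizing range
```

A single 5G photon carries roughly a ten-thousandth of the energy needed to knock electrons loose, which is why transmitter intensity matters for heating but not for ionization.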

More research would be welcome. The Vox article notes that much of the research on wireless systems and cancer is low-quality. Further, while wireless systems don’t seem to cause DNA damage, there may be other effects on cells. A very focused wireless transmission from inches away can excite molecules and raise their temperature–this is how a microwave oven works–so it might be a good idea to keep your cellphone on your desk, not in your pocket, when possible. In the end, however, resist the technopanic–we don’t see much to be concerned about.

]]>
https://techliberation.com/2018/11/06/some-data-on-wireless-networks-and-cancer-rates/feed/ 1 76401
Event Video: My Talk at Reboot 2018 about “Innovation Under Threat” https://techliberation.com/2018/10/25/event-video-my-talk-at-reboot-2018-about-innovation-under-threat/ https://techliberation.com/2018/10/25/event-video-my-talk-at-reboot-2018-about-innovation-under-threat/#respond Thu, 25 Oct 2018 20:50:51 +0000 https://techliberation.com/?p=76397

Last month, it was my great honor to be invited to be a keynote speaker at Lincoln Network’s Reboot 2018 “Innovation Under Threat” conference. Zach Graves interviewed me for 30 minutes about a wide range of topics, including: innovation arbitrage, evasive entrepreneurialism, technopanics, the pacing problem, permissionless innovation, technological civil disobedience, existential risk, soft law and more. They’ve now posted the full event video and you can watch it down below.

]]>
https://techliberation.com/2018/10/25/event-video-my-talk-at-reboot-2018-about-innovation-under-threat/feed/ 0 76397
The Pacing Problem and the Future of Technology Regulation https://techliberation.com/2018/08/10/the-pacing-problem-and-the-future-of-technology-regulation/ https://techliberation.com/2018/08/10/the-pacing-problem-and-the-future-of-technology-regulation/#respond Fri, 10 Aug 2018 12:48:10 +0000 https://techliberation.com/?p=76342

[first published at The Bridge on August 9, 2018]

What happens when technological innovation outpaces the ability of laws and regulations to keep up?

This phenomenon is known as “the pacing problem,” and it has profound ramifications for the governance of emerging technologies. Indeed, the pacing problem is becoming the great equalizer in debates over technological governance because it forces governments to rethink their approach to the regulation of many sectors and technologies.

The Innovation Cornucopia

Had Rip Van Winkle woken up from his famous nap today, he’d be shocked by all the changes around him. At-home genetics tests, personal drones, driverless cars, lab-grown meats, and 3D-printed prosthetic limbs are just some of the amazing innovations that would boggle his mind. New devices and services are flying at us so rapidly that we sometimes forget that most did not even exist a short time ago. At this point, it feels like our smartphones have been in our lives forever, but even just a decade ago, very few of us had one. Likewise, plenty of people now regularly enjoy the benefits of the sharing economy, but ten years ago, Uber, Lyft, and Airbnb did not even exist. Most of the social networking platforms or online video and audio streaming services that we use today had not even been created 15 years ago. Back then, Netflix’s DVD mail subscription service seemed downright revolutionary.

With every innovation come more questions about how the law should keep pace, or whether it even can. “There has always been a pacing problem,” observes Yale University bioethicist Wendell Wallach, author of A Dangerous Master: How to Keep Technology from Slipping beyond Our Control. But what Wallach and many other scholars worry about today is that the pace of change has been kicked into overdrive, making it more difficult than ever for traditional legal schemes and regulatory mechanisms to stay relevant. Larry Downes refers to this as “The Law of Disruption.” In his 2009 book on this “law,” Downes showed how “technology changes exponentially, but social, economic, and legal systems change incrementally” and that this law was becoming “a simple but unavoidable principle of modern life.”

Moore’s Law Quickens the Pace

There are three primary reasons the pacing problem is such a force in our modern world. The root cause lies in the power of “combinatorial innovation,” which is driven by “Moore’s Law.” The Information Revolution spawned a stunning array of new technological capabilities that build on top of one another in a symbiotic fashion. Think about the shared foundational elements of most modern inventions: microchips, sensors, digital code, big data, cloud computing, remote data storage, wireless networking and geolocation capabilities, machine learning, cryptography, and more. Each of these underlying capabilities is becoming faster, cheaper, smaller, more powerful, and easier to find and use. Innovators are combining them as part of their ongoing search for new and better ways of doing things.

Moore’s Law powers these developments. Moore’s Law is the principle named after Intel co-founder Gordon E. Moore, who first observed in 1965 that “computing would dramatically increase in power, and decrease in relative cost, at an exponential pace” in coming years. Indeed, it has continued to do so for the past half century for many information technologies. A recent Technology Policy Institute white paper noted that “data transit prices fell from about $1200 per Mbps in 1998 to $0.02 per Mbps in 2017.”
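Those two price points imply a startling compound rate of decline, which a few lines of arithmetic (using only the endpoint figures quoted above) make concrete:

```python
import math

# Endpoints quoted from the Technology Policy Institute white paper:
# ~$1200 per Mbps in 1998 down to ~$0.02 per Mbps in 2017.
p_start, p_end = 1200.0, 0.02
years = 2017 - 1998  # 19 years

factor = p_start / p_end                   # total drop: 60,000x
annual = (p_end / p_start) ** (1 / years)  # average per-year price multiplier
halving = years / math.log2(factor)        # implied price-halving time, years

print(f"total drop: {factor:,.0f}x")
print(f"prices fell about {100 * (1 - annual):.0f}% per year")
print(f"prices halved roughly every {halving:.1f} years")
```

On these endpoints, transit prices fell by roughly 44% a year, halving about every 14 months for nearly two decades.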

These forces are now revolutionizing other sectors as “software eats the world” and innovators utilize these new technologies to address nearly every conceivable need and want. In the field of genetics, the biological equivalent of Moore’s Law is known as the “Carlson curve.” The past two decades have seen the cost of sequencing a human genome fall from over $100 million to under $1,000, a rate nearly three times faster than Moore’s Law.

What the Public Wants, the Public Gets

The second reason the pacing problem is accelerating is that the public wants it to! It is true that many people say they are uneasy with many emerging technologies. When new gadgets and services first gain attention, a “technopanic” attitude often ensues. That is unsurprising because, as others have noted, “fear has gone hand in hand with technological advancements throughout history.”

But societal attitudes toward technological change often shift rapidly. They do so even faster today as citizens quickly assimilate new tools into their daily lives and then expect that even more and better tools will be delivered tomorrow. As more people begin to realize how new technologies improve our lives in meaningful ways, it becomes extremely hard for policymakers to take those innovations away or even tell us not to expect better ones. This relationship between technological change and societal expectations acts as an extraordinarily powerful check on the ability of regulators to “roll back the clock” on innovative activities.

Broken Government Exacerbates the Problem

Finally, the pacing problem is becoming more acute because “demosclerosis” and “kludgeocracy” have taken hold within American government. Jonathan Rauch coined the term demosclerosis in his 1999 book Government’s End: Why Washington Stopped Working to describe “government’s progressive loss of the ability to adapt.” “[A]s layer is dropped upon layer,” he argued, “the accumulated mass becomes gradually less rational and less flexible.”

Instead of cleaning up old legalistic messes and adapting to the times, government solutions are more often clumsily cobbled together to patch past problems and create temporary solutions. Steven Teles refers to this as kludgeocracy. “The complexity and incoherence of our government often make it difficult for us to understand just what that government is doing,” Teles says. Kludgeocracy creates serious costs for individual citizens, governments themselves, and to our democratic systems more generally, he argues. Taken together, demosclerosis and kludgeocracy breed highly dysfunctional governments and make it even easier for the pacing problem to speed ahead.

Can Policymakers Adapt?

Regulators are not oblivious to the challenges posed by the pacing problem. “I have said more than once that innovation moves at the speed of imagination and that government has traditionally moved at, well, the speed of government,” remarked Michael Huerta, head of the Federal Aviation Administration, in a 2016 speech regarding drones. Shortly after Huerta made those comments, the Department of Transportation released a report on the regulation of driverless car technology which noted that “The speed with which [driverless cars] are advancing, combined with the complexity and novelty of these innovations, threatens to outpace the Agency’s conventional regulatory processes and capabilities.”

Food and Drug Administration (FDA) regulators have increasingly referenced the pacing problem when discussing the challenge of keeping up with new medical innovations.  The New York Times recently asked Dr. Peter Marks, director of the FDA’s Center for Biologics Evaluation and Research, how the agency planned to deal with hundreds of “rogue” stem cell treatment clinics. “There are hundreds and hundreds of these clinics,” he said. “We simply don’t have the bandwidth to go after all of them at once.”

The pacing problem has even crept into antitrust enforcement. The US Department of Justice (DOJ) sought to break up Microsoft in the late 1990s, but as the legal proceedings dragged on through the early 2000s, the market moved on and mooted the DOJ’s case. Google Chrome and Mozilla Firefox emerged as legitimate competitors to Microsoft’s Internet Explorer without any regulatory remedy. In the end, Microsoft reached a settlement with the DOJ that fell far short of the government’s original ambition to break up the firm, all because the market moved much faster than the regulator could. More recent antitrust actions in the US and EU suffer from the same pacing problem: multi-year investigations reach conclusions that no longer reflect market trends in the intervening years and offer remedies that may be “too little, too late,” especially in the information technology sector.

Is the Pacing Problem Really the Pacing Benefit?

What should policymakers do in light of these new challenges? The extremes will not work. Lawmakers and regulators cannot simply double down on the lethargic and unwieldy technocratic regulatory schemes of the past. Command-and-control tactics are not going to be effective in an age when technology evolves in a quicksilver fashion. In a world where “innovation arbitrage” is easier than ever, repressive crackdowns on new tech will often backfire. Evasive entrepreneurs will simply move to those jurisdictions where their innovative acts are treated more hospitably. That, too, exacerbates the pacing problem.

From the perspective of many innovation advocates, this will make it seem like the pacing problem is more like the pacing benefit. Generally speaking, that intuition is sound. Innovation is the fundamental driver of human betterment. We need more “moonshots”—“radical but feasible solutions to important problems”—to ensure that current and future generations enjoy more choices, greater mobility, increased wealth, better health, and longer lifespans. We don’t want archaic regulatory schemes and regimes holding that back.

Constructive Solutions

But policymakers will not abandon oversight of emerging technologies altogether, nor should we want them to. The potential harms associated with some new technologies could be significant enough that a certain degree of regulatory oversight will be required. But the pacing problem means the old, inflexible, top-down approaches will need to be discarded and that the administrative state itself must become more entrepreneurial.

In a forthcoming law review article entitled “Soft Law for Hard Problems: The Governance of Emerging Technologies in an Uncertain Future,” Jennifer Skees, Ryan Hagemann, and I discuss how “soft law” mechanisms—multi-stakeholder processes, industry best practices and standards, workshops, agency guidance, and more—can help fill the governance gap as the pacing problem accelerates. Many agencies are already tapping soft law tools to help guide the development of new technologies such as driverless cars, drones, the Internet of Things, mobile medical applications, and artificial intelligence. In fact, we argue that soft law has already become the dominant form of technological governance for emerging tech in the US.

Critics might decry soft law as either being too lax (and open to private abuse) or too informal (and open to government abuse), but the pacing problem makes both arguments increasingly irrelevant. We need a new governance vision for the technological age. Our new governance systems must be more flexible and adaptive than the heavy-handed regulatory regimes that preceded them.

FCC’s Ajit Pai on Importance of Permissionless Innovation Vision
https://techliberation.com/2018/08/07/fccs-ajit-pai-on-importance-of-permissionless-innovation-vision/
Tue, 07 Aug 2018 17:34:21 +0000

FCC Chairman Ajit Pai recently delivered an excellent speech at the Resurgent Conference in Austin, TX. In it, he stressed the importance of adopting a permissionless innovation policy vision to ensure a bright future for technology, economic growth, and consumer welfare. The whole thing is worth your time, but the last two paragraphs make two essential points worth highlighting.

Pai correctly notes that we should reject the sort of knee-jerk hysteria or technopanic mentality that sometimes accompanies new technologies. Instead, we should have some patience and humility in the face of uncertainty and be open to new ideas and technological creations.

“Here’s the bottom line,” Pai concludes:

Whenever a technological innovation creates uncertainty, some will always have the knee-jerk reaction to presume it’s bad. They’ll demand that we do whatever’s necessary to maintain the status quo. Strangle it with a study. Call for a commission. Bemoan those supposedly left behind. Stipulate absolute certainty. Regulate new services with the paradigms of old. But we should resist that temptation. “Guilty until proven innocent” is not a recipe for innovation, and it doesn’t make consumers better off. History tells us that it is not preemptive regulation, but permissionless innovation made possible by competitive free markets that best guarantees consumer welfare. A future enabled by the next generation of technology can be bright, if only we choose to let the light in.

Read the whole thing here. Good stuff. I also appreciate him citing my work on the topic, which you can find in my last book and other publications.

Autonomous Vehicles Aren’t Just Driverless Cars: 5 Thoughts About the Future of Autonomous Buses
https://techliberation.com/2018/02/13/autonomous-vehicles-arent-just-driverless-cars-5-thoughts-about-the-future-of-autonomous-buses/
Tue, 13 Feb 2018 16:19:22 +0000

Autonomous cars have been discussed rather thoroughly recently, and at this point it seems a question of when and how, rather than if, they will become standard. But as this issue starts to settle, new questions about the application of autonomous technology to other types of transportation are becoming ripe for policy debates. While a great deal of attention has focused on the potential to revolutionize the trucking and shipping industries, not as much attention has been paid to how automation may improve both intercity and intracity bus travel, or other public and private transit like trains. Recent requests for comment from the Federal Transit Administration show that policymakers are starting to consider these other modes of transit in preparing their next recommendations for autonomous vehicles. Here are 5 issues that will need to be considered for an autonomous transit system.

1. Establish what agency or sub-agency, if any, has the authority to regulate or guide development of autonomous buses.

Currently, the National Highway Traffic Safety Administration (NHTSA) has provided the most thorough guidance on autonomous vehicles, but it has focused almost exclusively on privately owned, individual transport rather than buses or trucking. Buses, meanwhile, are regulated by the Federal Transit Administration (FTA), NHTSA, the Federal Motor Carrier Safety Administration, the Transportation Security Administration (TSA), and various other agencies depending on what particular regulation is being addressed. With the growth of soft law, particularly for autonomous vehicles, this overlapping jurisdiction becomes even more complicated for those hoping to start a new driverless bus system.

For example, an innovator hoping to start a driverless bus system from Washington, DC to New York City could win approval from NHTSA for the vehicle’s safety standards through an informal sandbox, yet find him or herself fighting the TSA over the intercity travel, or state regulations in either jurisdiction, once the system was ready. This overlapping jurisdiction at the federal level results in further delay for innovators who may think they have properly consulted the necessary agencies or believe they are not required to seek approval.

While evasive entrepreneurs have sometimes been able to work within and around such regulations, at other times they have had to engage in innovation arbitrage to continue such projects, or have stopped development before it was fully realized. Yes, Elon Musk might be willing to flip the switch on the Hyperloop with a verbal yes, but other innovators and investors are less likely to pursue costly projects that regularly face regulatory rejection.

2. Vertical Take Off and Landing (VTOL) may be more transformational than autonomous buses

It’s possible that at some point multi-passenger Vertical Take Off and Landing (VTOL) aircraft may actually prove more disruptive and take the place of standard buses. These devices are basically drones that can carry human passengers.

Uber, for example, has already announced its plans to test such technology in the relatively near future. Just like we may skip some levels of automation on particular technologies, we may find that we are better off skipping autonomous buses in favor of other technology altogether.

3. State and local governments also have significant impact on buses and that’s not necessarily bad.

Right now a great deal of the regulation of both autonomous vehicles and intracity transit is done at the state or local level, through restrictions on operations, noise control, and locally sanctioned monopolies. Some of this is because of the increasing difficulty of creating formalized rules or legislation that address disruptive technology at a pace sufficient to keep up with innovation. As Adam Thierer, Ryan Hagemann, and I discuss in our forthcoming paper, this has led to an increased use of soft law at the federal level. It has also opened a window for state and local governments to try new policy solutions and determine what (if any) form of regulation might best encourage a disruptive technology like autonomous vehicles. While some economists might argue that every new government regulation is a barrier to efficiency, allowing such local regulation is not in and of itself bad.

If the federal government were to become the new bus czar, it would not likely end well. Not only would cities and states protest the usurpation of their traditional role, but the federal government would lack the local knowledge to determine which tradeoffs to make. Transit best serves citizens when the decisions that most directly affect them are made locally. While the future may bring less rigid routes, schedules, and stops through services like Lyft Shuttle, even these will require some knowledge of local needs to determine the hours and areas for the most profitable operation.

At the same time, there are real risks that a few powerful cities or states like California or New York City could prevent life-saving innovations like autonomous transit from reaching smaller markets. This could happen in a variety of ways, from permitting to lane restrictions to funding. Still, when the market is viewed as national or even international, it is likely that innovators would simply take their technology to jurisdictions that welcome it. For example, following increased regulations on autonomous vehicle testing in California, Uber moved its testing to a more welcoming regulatory environment in Arizona. While such innovation arbitrage is not as easy for an entire transit system, states and cities that are more welcoming, or at least willing to work with technological disruptors, are more likely to see innovators flock to them, along with the tangential benefits of allowing such new technology.

4. Smart cities v. dumb choices

In general, many states and cities should be applauded for taking proactive action to prepare for the potentially transformative changes of driverless cars. However, many of these actions are dumb choices that neither prepare for the change nor promote innovation. As Emily Hamilton has written, “Self-interested incentives may lead policymakers to implement new technologies without making real changes in the quality of service delivery.”

Some of the investment in technological infrastructure has its benefits, such as providing data to inform infrastructure decisions and increasing safety and connectedness by enabling more direct communication with citizens. At the same time, many of these projects have been little more than novelties and suffer from the same cronyism issues as other government-funded projects. With autonomous vehicles, cities and states risk betting on the wrong horse and investing in technology that will later be incompatible with the most common products on the market. As Michael and Emily Hamilton have written, given the gap between the proposal of legislation and its actual implementation, it is easy for “smart” technologies to be outdated by the time they actually reach citizens.

Still, there are general policy changes that can prepare cities for a smart future. Adam Thierer has written about three policy proposals (an Innovator’s Presumption, a Sunsetting Imperative, and a Parity Provision) that would enable policymakers to create cities that embrace innovation. Rather than targeting specific technologies, these proposals would create a regulatory environment that encourages experimentation and innovation across a variety of industries.

5. Concerns about the impact of autonomous buses are well-intentioned, but typically more about incumbents maintaining their market share.

As Michael Farren and I wrote about the collective freakout in Oregon over having to pump your own gas, technopanics often overlap with embedded cronyism, or with incumbents trying to keep out new entrants through a bootleggers-and-Baptists phenomenon.

Sadly, this phenomenon is starting to emerge in discussions about autonomous buses. Unions in some cities, like Columbus, Ohio, have publicly voiced their opposition if the jobs of current operators would be impacted. While job loss is painful, new technologies do not appear overnight, and they bring new job opportunities with them. Attempts by unions and other advocates to prevent any potential job losses from autonomous vehicles could cost hundreds of thousands of lives, including those of bus and truck drivers. Delaying a life-saving technology that could benefit many because it may negatively impact a few is, in most cases, not a desirable tradeoff. Policymakers and advocates must realize that there will always be tradeoffs and recognize that a small loss is often necessary for a larger gain.

Technology does not just destroy jobs; it also creates them. A 2015 Deloitte study found that in the 140 years since the industrial revolution, new technology had created more jobs than it had destroyed, and not just in areas directly related to the technology. As technology made things like agriculture and manufacturing easier and individuals gained more free time, significant growth occurred not only in jobs directly related to technology but also in service and creative industries. While cars may have put blacksmiths out of work, they provided new opportunities for many others by creating and expanding new industries. Today’s disruptive technologies will likely do the same.

As both the technology and the policy surrounding autonomous vehicles evolve, these and many other issues will have to be discussed and decided. It is welcome that such conversations are beginning to embrace the broader applications of the technology rather than focusing solely on “driverless cars,” and hopefully this expanded focus will allow for even greater innovation and benefits.

The Top 10 Tech Liberation Posts in 2017
https://techliberation.com/2018/01/02/the-top-tech-liberation-posts-in-2017/
Tue, 02 Jan 2018 20:54:44 +0000

Technology policy has made major inroads into a growing number of fields in recent years, including health care, labor, and transportation, and we at the Technology Liberation Front have brought a free-market lens to these issues for over a decade. As is our annual tradition, below are the most popular posts* from the past year, as well as key excerpts.

Enjoy, and Happy New Year.

10. Thoughts on “Demand” for Unlicensed Spectrum

Unlicensed spectrum is a contentious issue because the FCC gives out this valuable spectrum for free to device companies. The No. 10 most-read piece in 2017 was my January commentary on the proposed Mobile Now Act. In particular, I was alarmed at some of the vague language encouraging unlicensed spectrum.

Note that we have language about supply and demand here [in the bill]. But unlicensed spectrum is free to all users using an approved device (that is, nearly everyone in the US). Quantity demanded will always outstrip quantity supplied when a valuable asset (like spectrum or real estate) is handed out when price = 0. By removing a valuable asset from the price system, large allocation distortions are likely. Any policy originating from Congress or the FCC to satisfy “demand” for unlicensed spectrum biases the agency towards parceling out an excessive amount of unlicensed spectrum.

9. The FCC’s Misguided Paid Priority Ban

Net neutrality has been generating clicks for over a decade and there was plenty of net neutrality news in 2017. In April, I explained why regulating and banning “paid priority” agreements online is damaging to the Internet.

The notion that there’s a level playing field online needing preservation is a fantasy. Non-real-time services like Netflix streaming, YouTube, Facebook pages, and major websites can mostly be “cached” on servers scattered around the US. Major web companies have their own form of paid prioritization–they spend millions annually, including large payments to ISPs, on transit agreements, CDNs, and interconnection in order to avoid congested Internet links. The problem with a blanket paid priority ban is that it biases the evolution of the Internet in favor of these cache-able services and against real-time or interactive services like teleconferencing, live TV, and gaming. Caching doesn’t work for these services because there’s nothing to cache beforehand.

Happily, a few months after this post was published the Trump FCC, led by Chairman Pai, eliminated the intrusive 2015 Internet regulations, including the “paid priority ban.”

8. Who needs a telecom regulator? Denmark doesn’t.

In March, the Mercatus Center published a case study by Roslyn Layton, a Trump transition team member, and Joe Kane about Denmark’s successful telecom reform since the 1990s. I summarized the paper for readers after it was published.

Layton and Kane explore Denmark’s relatively free-market telecom policies. They explain how Denmark modernized its telecom laws over time as technology and competition evolved. Critically, the center-left government eliminated Denmark’s telecom regulator in 2011 in light of the “convergence” of services to the Internet. Scholars noted, “Nobody seemed to care much—except for the staff who needed to move to other authorities and a few people especially interested in IT and telecom regulation.” Even-handed, light telecom regulation performs pretty well. Denmark, along with South Korea, leads the world in terms of broadband access. The country also has a modest universal service program that depends primarily on the market. Further, similar to other Nordic countries, Denmark permitted a voluntary forum, including consumer groups, ISPs, and Google, to determine best practices and resolve “net neutrality” controversies.

This fascinating Layton-Kane case study inspired a November event in DC about the future of US telecom law featuring FCC Chairman Ajit Pai and former Danish regulator Jakob Willer.

7. Shouldn’t the Robots Have Eaten All the Jobs at Amazon By Now?

Artificial intelligence and robotics are advancing rapidly but no one is certain what the effects will be for American labor markets. In July, Adam looked at Amazon’s incorporation of robots and urged scholars and policymakers to resist the doomsayers who predict crushing unemployment.

The reality is that we suffer from a serious poverty of imagination when it comes to thinking about the future, and future job opportunities in particular. …Old jobs and skills are indeed often replaced by mechanization and new technological processes. But that in turn opens the door to people to take on new opportunities — often in new sectors and new firms, but sometimes even within the same industries and companies. And because human needs and wants are essentially infinite, this process just goes on and on and on as we search for new and better ways of doing things. And that’s how, in the long run, robots and automation are actually employment-enhancing rather than employment-reducing.

6. Does “Permissionless Innovation” Even Mean Anything?

Adam spoke at an Arizona State University conference in May about emerging technologies and published his remarks at Tech Liberation. He commented on the rise of “soft law” for government oversight of tech-infused, fast-moving industries.

That is, there seemed to be some grudging acceptance on both our parts that “soft law” systems, multistakeholder processes, and various other informal governance mechanisms will need to fill the governance gap left by the gradual erosion of hard law. Many other scholars, including many of you in this room, have discussed the growth of soft law mechanisms in specific contexts, but I believe we have probably failed to acknowledge the extent to which these informal governance models have already become the dominant form of technological governance, at least in the United States.

5. Book Review: Garry Kasparov’s “Deep Thinking”

In May, Adam reviewed Garry Kasparov’s new book about AI, describing it as a “welcome breath of fresh air” in a genre often devoted to generating technopanics.

Kasparov’s book serves as the perfect antidote to the prevailing gloom-and-doom narrative in modern writing about artificial intelligence (AI) and smart machines. His message is one of hope and rational optimism about a future in which we won’t be racing against the machines but rather running alongside them and benefiting in the process. …Kasparov suggests that there are lessons for us in the history of chess as well as from his own experience competing against Deep Blue. He notes that his match against IBM’s supercomputer, “was symbolic of how we are in a strange competition both with and against our creation in more ways every day.” Instead of just throwing our hands up in the air in frustration, we must be willing to embrace the new and unknown — especially AI and machine-learning.

4. Remember What the Experts Said about the Apple iPhone 10 Years Ago?

2017 marked the ten-year anniversary of the release of the first iPhone. Adam took a look back at some of the predictions made when the groundbreaking device first hit stores.

A decade after these predictions were made, Motorola, Nokia, Palm, and Blackberry have been decimated by the rise of Apple as well as Google (which actually purchased Motorola in the midst of it all). And Microsoft still struggles with mobile even though they are still a player in the field. Rarely have Joseph Schumpeter’s “perennial gales of creative destruction” blown harder than they have in the mobile sector over this 10 year period.

3. 4 Ways Technology Helped During Hurricanes Harvey and Irma (and 1 more it could have)

Jennifer Huddleston Skees joined our team in 2017, and in September she wrote the No. 3 most-popular post of the year, about how technology is aiding disaster relief.

Technology is changing the way we respond to disasters and assisting with relief efforts. As Allison Griswold writes at Quartz, this technology enabled response has redefined how people provide assistance in the wake of disaster. We cannot plan how such technology will react to difficult situations or the actions of such platforms users, but the recent events in Florida and Texas show it can enable us to help one another even more. The more technology is allowed to participate in a response, the better it enables people to connect to those in need in the wake of disaster.

2. Some background on broadband privacy changes

Hyperbole, misinformation, and worse are amplified in too many news stories and Facebook feeds whenever Republicans undo an Obama FCC priority. Early in 2017, Congress and President Trump used the rarely-invoked Congressional Review Act process to repeal broad Internet privacy regulations passed by the Obama FCC in 2016. My explainer about what was really going on (no, ISPs are not selling your SSNs and location information without your permission) was the No. 2 story of the year.

Considering that these notice and choice rules have not even gone into effect, the rehearsed outrage from advocates demands explanation: The theatrics this week are not really about congressional repeal of the (inoperative) privacy rules. Two years ago the FCC decided to regulate the Internet in order to shape Internet services and content. The leading advocates are outraged because FCC control of the Internet is slipping away. Hopefully Congress and the FCC will eliminate the rest of the Title II baggage this year.

1. Here’s why the Obama FCC Internet regulations don’t protect net neutrality

There are plenty of myths about the 2015 “net neutrality” Order. Fortunately, many people out there are skeptical of the conventional narrative surrounding net neutrality. My post from July about the paper-thin net neutrality protections in the 2015 Order saw new life in November and December, when the Trump FCC released a proposal to repeal it. Driven by the theatrics of those opposing the December 2017 Restoring Internet Freedom Order (and a Mark Cuban retweet), the post came from behind to become the most-read Technology Liberation post of the year.

The 2016 court decision upholding the rules was a Pyrrhic victory for the net neutrality movement. In short, the decision revealed that the 2015 Open Internet Order provides no meaningful net neutrality protections–it allows ISPs to block and throttle content. As the judges who upheld the Order said, “The Order…specifies that an ISP remains ‘free to offer ‘edited’ services’ without becoming subject to the rule’s requirements.”

No one knows what 2018 has in store for technology policy, but your loyal TLF bloggers are preparing for driverless car technology, cybersecurity, spectrum policy, and more.

Stay tuned, and thanks for reading.

 

*Excepting the most-read post, which was a 2017 update to a 2014 post from Adam about the definition of technology.

How to Sell a Book about Tech Policy: Turn the Technopanic Dial Up to 11
https://techliberation.com/2018/01/02/how-to-sell-a-book-about-tech-policy-turn-the-technopanic-dial-up-to-11/
Tue, 02 Jan 2018 16:34:22 +0000

Reason magazine recently published my review of Franklin Foer’s new book, World Without Mind: The Existential Threat of Big Tech. My review begins as follows:

If you want to sell a book about tech policy these days, there’s an easy formula to follow. First you need a villain. Google and Facebook should suffice, but if you can throw in Apple, Amazon, or Twitter, that’s even better. Paint their CEOs as either James Bond baddies bent on world domination or naive do-gooders obsessed with the quixotic promise of innovation. Finally, come up with a juicy Chicken Little title. Maybe something like World Without Mind: The Existential Threat of Big Tech. Wait—that one’s taken. It’s the title of Franklin Foer’s latest book, which follows this familiar techno-panic template almost perfectly.

The book doesn’t break a lot of new ground; it serves up the same old technopanicky tales of gloom-and-doom that many others have said will befall us unless something is done to save us. But Foer’s unique contribution is to unify many diverse strands of modern tech criticism in one tome, and then amp up the volume of panic about it all. Hence, the “existential” threat in the book’s title. I bet you didn’t know the End Times were so near!

Read the rest of my review over at Reason. And, if you care to read some of my other essays on technopanics through the ages, here’s a compendium of them.

Technological Mad Libs: How the Common Law Evolves to Embrace Disruptive Technology Despite Legal Technopanic
https://techliberation.com/2017/08/07/technological-mad-libs-how-the-common-law-evolves-to-embrace-disruptive-technology-despite-legal-technopanic/
Mon, 07 Aug 2017 19:42:02 +0000

“First electricity, now telephones. Sometimes I feel as if I were living in an H.G. Wells novel.” –Dowager Countess, Downton Abbey

Every technology we take for granted was once new, different, disruptive, and often ridiculed and resisted as a result. Electricity, telephones, trains, and television all once caused widespread fears in the way that robots, artificial intelligence, and the internet of things do today. Typically, most people eventually realize that these fears were misplaced and overly pessimistic; the technology diffuses, and we can barely remember our lives without it. But in recent technopanics, there has been a concern that the legal system is not properly equipped to handle the possible harms from these new technologies. As a result, there are often calls to regulate or rein in their use.

In the late 1980s, video cassette recorders (VCRs) caused a legal technopanic. The concern was not, as in many technopanics, that VCRs would lead to some bizarre human mutation, but rather that the existing system of copyright infringement and vicarious liability could not adequately address the potential harm to the motion picture industry. The then-president of the Motion Picture Association of America, Jack Valenti, famously told Congress, “I say to you that the VCR is to the American film producer and the American public as the Boston Strangler is to the woman home alone.”

In the eyes of the film and television producers, the legal system did not have the resources to protect their copyrights or to hold the manufacturers and distributors of these disruptive machines properly liable. The Ninth Circuit initially sided with the producers, finding that recording television programs for home viewing was not covered by a blanket fair use exception in copyright law and that the manufacturers and distributors of VCRs could be held vicariously liable for their customers’ infringement. The Supreme Court overturned that decision by a single vote.

By denying the movie industry a victory, the courts ironically handed it a much bigger one. By allowing for the widespread adoption of this technology, the courts opened a new line of profit for the studios in home video sales without crippling copyright law in the process. The episode also shows that the process, by and large, works. Individuals who pirated or distributed copyrighted video material (remember the FBI warnings at the start of tapes) could still be held personally liable for their violations. If Congress had intervened, its rules would likely have been too broad or too narrow to provide the appropriate remedy that the common law did. Similar concerns arise today with new creative techniques such as 3D printing, but it is typically best to at least let the common law attempt to address these concerns before deeming it incapable. This illustrates how liability norms can evolve naturally over time to strike a sensible balance.

This legal technopanic also emerged around the Internet. The global and anonymous nature of the Internet naturally makes it more difficult to perceive the potential harms, to identify the perpetrators, and to gain jurisdiction over them. Or so the legal technopanic goes. As Judge Easterbrook explained in his 1996 article “Cyberspace and the Law of the Horse,” “the best way to learn the law applicable to specialized endeavors is to study general rules.” Intellectual property law, and property rights more generally, are relatively well-defined general rules. The beauty of the common law is its ability to adapt to a specific situation. Still, some concerns may require intervention. When necessary, such interventions are especially important because, as John Villasenor wrote, “While technology is usually described as an enabler … liability is often described as an impediment.”

For example, Congress preemptively limited the liability of internet service providers in Section 230 of the Communications Decency Act. While there always seem to be concerns over this immunity when bad things happen on the internet, by and large the courts have been able to determine when an ISP was actively contributing to violations of state and federal laws. In fact, the protection provided by Section 230 merely codified the same common law principles that led to the protection of the VCR.

A little protection via legislation was necessary to allow the internet to flourish, but that protection was needed in part because of a legal technopanic. Similarly, Congress intervened to establish a notice-and-takedown procedure through the Digital Millennium Copyright Act (DMCA) when it became apparent that existing copyright law was not evolving as quickly as technology to address the concerns of both internet hosts and copyright holders. While ideally the common law would have been allowed to evolve to a conclusion on the issue, the sudden rise of YouTube and other online services necessitated at least a temporary intervention. Such legislation represents a compromise, whereas litigation would likely have produced a clear winner and loser. As a result, while the common law is typically preferable, sometimes legislation is necessary to at least temporarily establish a norm and keep a possible legal technopanic from stifling innovation.

By and large, the courts have adapted to disruptive technology as quickly as society has, or even more quickly, and as a result have allowed the common law to see reason. Perhaps the moral of the story is, as Edward Coke wrote in 1642, “The common law itself is nothing else but reason.”

What a 1911 Silent Movie Tells Us about the Technopanic Mentality https://techliberation.com/2017/06/21/what-a-1911-silent-movie-tells-us-about-the-technopanic-mentality/ https://techliberation.com/2017/06/21/what-a-1911-silent-movie-tells-us-about-the-technopanic-mentality/#comments Wed, 21 Jun 2017 20:36:35 +0000 https://techliberation.com/?p=76148

I’ve written here before about the problems associated with the “technopanic mentality,” especially when it comes to how technopanics sometimes come to shape public policy decisions and restrict important new, life-enriching innovations. As I argued in a recent book, the problem with this sort of Chicken Little thinking is that, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning, it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.”

Perhaps the worst thing about worst-case thinking is how short-sighted and hubristic it can be. The technopanic crowd often has an air of snootiness about it, asking us to largely ignore the generally positive long-run trends associated with technological innovation and instead focus on hypothetical fears about an uncertain future that apparently only they can foresee. This is the case whether they are predicting the destruction of jobs, the economy, lifestyles, or culture. Techno-dystopia lies just around the corner, they say, but the rest of us are ignorant sheep who just can’t see it coming!

In his wonderful 2013 book, Smarter Than You Think: How Technology Is Changing Our Minds for the Better, Clive Thompson correctly noted that “dystopian predictions are easy to generate” and “doomsaying is emotionally self-protective: if you complain that today’s technology is wrecking the culture, you can tell yourself you’re a gimlet-eyed critic who isn’t hoodwinked by high-tech trends and silly, popular activities like social networking. You seem like someone who has a richer, deeper appreciation for the past and who stands above the triviality of today’s life.”

Stated differently, the doomsayers are guilty of a type of social and technical arrogance. They are almost always wrong on history, wrong on culture, and wrong on facts. Again and again, humans have proven remarkably resilient in the face of technological change and have overcome short-term adversity. Yet the technopanic pundits are almost never called out for their elitist attitudes when their prognostications are proven wildly off-base. Even more concerning, their Chicken Little antics lead them and others to ignore the more serious risks that may actually exist and are worthy of our attention.

Here’s a nice example of that last point that comes from a silent film made all the way back in 1911! (Ironically, it was a tweet by Clive Thompson that brought this clip to my attention.) The short film is called The Automatic Motorist and here’s how Michael Waters summarizes the plot in a post over at Atlas Obscura: “In it, a robot chauffeur is developed to drive a newly wedded couple to their honeymoon destination. But this robot malfunctions, and all of a sudden the couple is marooned in outer space (and then sinking underwater, and then flying through the sky—it’s complicated).” In sum: don’t trust robots or autonomous systems or you will probably die.

Regardless of how silly the plot sounds or the film looks, what I really found interesting was the way the film jumped right into the classic sci-fi dystopian scenario of ROBOTS GONE WILD. Countless other books, stories, movies, and TV shows would follow that same predictable plot line in subsequent decades. In one sense, it’s entirely logical why authors and screenwriters do this. Simply put, bad news sells, and that is especially true when the bad news is delivered in the form of robotic systems running amok and threatening the future of humanity.

But I wonder… did the creators of The Automatic Motorist ever consider the far more risky scenario surrounding automobiles? Specifically, isn’t it a shame that they didn’t foresee the millions upon millions of deaths that would occur due to human error behind the wheel?

The tale of automation-gone-wrong always makes for better box office and book sales, but fear-mongering about technologies can condition people (and policymakers) to think in fearful terms about those products and systems. Robotic cars would have been impossible in 1911, obviously, so perhaps this concern seems meaningless in this context. But it is indicative of the bigger problem of the technopanic crowd focusing on hypothetical worst-case scenarios and avoiding the more mundane — but ultimately far more concerning — real-world risks that might occur in the absence of ongoing technological innovation.

And in many ways this is still the debate we are having in 2017 as the discussion about robotic “driverless” cars has finally ripened. We stand on the brink of what may become one of the great public health success stories of our lifetime. With the roadway death toll climbing for the first time in decades (around 40,000 deaths last year; or over 100 people dying on the roads every day), and with 94 percent of accidents being attributable to human error, those facts alone should constitute the most powerful reason to give autonomous technology a chance to prove itself. If policymakers fail to do so, it could result in countless potential injuries and deaths that driverless cars probably could have prevented.

These “unseen” unintended consequences of misguided policies constitute a sort of hidden tax on humanity’s future. When the technopanic crowd tells us we must live in fear of each and every new innovation, it is creating the riskiest future scenario of them all: one that is stagnant and backwards-looking. The burden of proof is on the doomsayers to explain why we should be denied the benefits that accompany ongoing trial-and-error experimentation with new and better ways of doing things that could ensure us a safer and more prosperous future.
