“The world should think better about catastrophic and existential risks.” So says a new feature essay in The Economist. Indeed it should, and that includes existential risks associated with emerging technologies.
The primary focus of my research these days is broad-based governance trends for emerging technologies. In particular, I have spent the last few years attempting to better understand how and why “soft law” techniques have been tapped to fill governance gaps. As I noted in a recent post compiling my writing on the topic:
soft law refers to informal, collaborative, and constantly evolving governance mechanisms that differ from hard law in that they lack the same degree of enforceability. Soft law builds upon and operates in the shadow of hard law, but it lacks the degree of formality that hard law possesses. Despite many shortcomings and criticisms, soft law can be adapted more rapidly and flexibly than hard law to suit new circumstances and address complex technological governance challenges. This is why many regulatory agencies are tapping soft law methods to address shortcomings in traditional hard law governance systems.
As I argued in recent law review articles as well as my latest book, despite its imperfections, soft law has an important role to play in filling governance gaps that hard law struggles to address. But there are some instances where soft law simply will not cut it. Continue reading →
Does anyone remember Blockbuster and Hollywood Video? I assume most of you do, but wow, doesn’t it seem like forever ago when we actually had to drive to stores to get movies to watch at home? What a drag that was!
Yet, just 15 years ago, that was the norm and those two firms were the titans of video distribution, so much so that federal regulators at the Federal Trade Commission looked to stop their hegemony through antitrust intervention. But then those firms and whatever “market power” they possessed quickly evaporated as a wave of Schumpeterian creative destruction swept through video distribution markets. Both those firms and antitrust regulators had completely failed to anticipate the tsunami of technological and marketplace changes about to hit in the form of alternative online video distribution platforms as well as the rise of smartphones and robust nationwide mobile networks.
Today, this serves as a cautionary tale of what happens when regulatory hubris triumphs over policy humility, as Trace Mitchell and I explain in this new essay for National Review Online entitled, “The Crystal Ball of Antitrust Regulators Is Cracked.” As we note:
There is no discernable end point to the process of entrepreneurial-driven change. In fact, it seems to be proliferating rapidly. To survive, even the most successful companies must be willing to quickly dispense with yesterday’s successful business plans, lest they be steamrolled by the relentless pace of technological change and ever-shifting consumer demands. It is easy to understand why some people find it hard to imagine a time when Amazon, Apple, Facebook, and Google won’t be quite as dominant as they are today. But it was equally challenging 20 years ago to imagine that those same companies could disrupt the giants of that era.
Hopefully today’s policymakers will have a little more patience and will trust competition and continued technological innovation to bring us still more wonderful video choices.
Margaret Talbot has written an excellent New Yorker essay entitled, “The Rogue Experimenters,” which documents the growth of the D.I.Y.-bio movement. This refers to the organic, bottom-up, citizen science movement, or “leaderless do-ocracy” of tinkerers, as she notes. I highly recommend you check it out.
As I noted in my new book on Evasive Entrepreneurs and the Future of Governance, “DIY health services and medical devices are on the rise thanks to the combined power of open-source software, 3D printers, cloud computing, and digital platforms that allow information sharing between individuals with specific health needs. Average citizens are using these new technologies to modify their bodies and abilities, often beyond the confines of the law.”
Talbot discusses many of the same examples I discuss in my book, including:
the Four Thieves Vinegar collective, which devised instructions for building its own version of the EpiPen;
e-nable, an international collective of thirty thousand volunteers that designs and 3-D-prints prosthetic hands and arms (and which has, more recently, distributed more than fifty thousand face shields in more than twenty-five countries);
GenSpace and other community biohacking labs; and
Open Insulin and Open Artificial Pancreas System.
I like the way Talbot compares these movements to the hacker and start-up culture of the Digital Revolution: Continue reading →
I became a little bit more of a cyborg this month with the addition of two new eyes—eye lenses, actually. Before I had even turned 50, the old lenses that Mother Nature gave me were already failing due to cataracts. But after having two operations this past month and getting artificial lenses installed, I am seeing clearly again thanks to the continuing miracles of modern medical technology.
Cataracts can be extraordinarily debilitating. One day you can see the world clearly, the next you wake up struggling to see through a cloudy ocular soup. It is like looking through a piece of cellophane wrap or a continuously unfocused camera.
If you depend on your eyes to make a living, as most of us do, then cataracts make it a daily struggle to get even basic things done. I spend most of my time reading and writing each workday. Once the cataracts hit, I had to purchase a half-dozen pairs of strong reading glasses and spread them out all over the place: in my office, house, car, gym bag, and so on. Without them, I was helpless.
Reading is especially difficult in dimly lit environments, and even with strong glasses you can forget about reading the fine print on anything. Every pillbox becomes a frightening adventure. I invested in a powerful magnifying glass to make sure I didn’t end up ingesting the wrong things.
For those afflicted with particularly bad cataracts, it becomes extraordinarily risky to drive or operate machinery. More mundane things—watching TV, tossing a ball with your kid, reading a menu at many restaurants, looking at art in a gallery—also become frustrating. Continue reading →
Why should we really care about technological innovation? My Mercatus Center colleague James Broughel and I have just published a paper answering that question. In “Technological Innovation and Economic Growth: A Brief Report on the Evidence,” we summarize the extensive body of evidence that discusses the relationship between innovation, growth, and human prosperity. We note that while economists, political scientists, and historians don’t agree on much, there exists widespread consensus among them that there is a symbiotic relationship between the pace of innovation and the progress of civilization. Our 27-page paper documenting the academic evidence on this issue can be downloaded on SSRN or from the Mercatus website. Here’s the abstract:
Technological innovation is a fundamental driver of economic growth and human progress. Yet some critics want to deny the vast benefits that innovation has bestowed and continues to bestow on mankind. To inform policy discussions and address the technology critics’ concerns, this paper summarizes relevant literature documenting the impact of technological innovation on economic growth and, more broadly, on living standards and human well-being. The historical record is unambiguous regarding how ongoing innovation has improved the way we live; however, the short-term disruptive aspects of technological change are real and deserve attention as well. The paper concludes with an extended discussion about the relevance of these findings for shaping cultural attitudes toward technology and the role that public policy can play in fostering innovation, growth, and ongoing improvements in the quality of life of citizens.
This week I will be traveling to Montreal to participate in the 2018 G7 Multistakeholder Conference on Artificial Intelligence. This conference follows the G7’s recent Ministerial Meeting on “Preparing for the Jobs of the Future” and will also build upon the G7 Innovation Ministers’ Statement on Artificial Intelligence. The goal of Thursday’s conference is to “focus on how to enable environments that foster societal trust and the responsible adoption of AI, and build upon a common vision of human-centric AI.” About 150 participants selected by G7 partners are expected to attend, and I was invited as a U.S. expert, which is a great honor.
I look forward to hearing and learning from other experts and policymakers who are attending this week’s conference. I’ve been spending a lot of time thinking about the future of AI policy in recent books, working papers, essays, and debates. My most recent essay concerning a vision for the future of AI policy was co-authored with Andrea O’Sullivan and it appeared as part of a point/counterpoint debate in the latest edition of the Communications of the ACM. The ACM is the Association for Computing Machinery, the world’s largest computing society, which “brings together computing educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges.” The latest edition of the magazine features about a dozen different essays on “Designing Emotionally Sentient Agents” and the future of AI and machine-learning more generally.
In our portion of the debate in the new issue, Andrea and I argue that “Regulators Should Allow the Greatest Space for AI Innovation.” “While AI-enabled technologies can pose some risks that should be taken seriously,” we note, “it is important that public policy not freeze the development of life-enriching innovations in this space based on speculative fears of an uncertain future.” We contrast two different policy worldviews — the precautionary principle versus permissionless innovation — and argue that:
artificial intelligence technologies should largely be governed by a policy regime of permissionless innovation so that humanity can best extract all of the opportunities and benefits they promise. A precautionary approach could, alternatively, rob us of these life-saving benefits and leave us all much worse off.
That’s not to say that AI won’t pose some serious policy challenges for us going forward that deserve serious attention. Rather, we are warning against the dangers of allowing worst-case thinking to be the default position in these discussions. Continue reading →
The ongoing ride-sharing wars in New York City are interesting to watch because they signal the potential move by state and local officials to use infrastructure management as an indirect form of innovation control or competition suppression. It is getting harder for state and local officials to defend barriers to entry and innovation using traditional regulatory rationales and methods, which are usually little more than a front for cronyist protectionism schemes. Now that the public has increasingly enjoyed new choices and better services in this and other fields thanks to technological innovation, it is very hard to convince citizens they would be better off without more of the same.
If, however, policymakers claim that they are limiting entry or innovation based on concerns about how disruptive actors supposedly harm local infrastructure (in the form of traffic or sidewalk congestion, aesthetic nuisance, deteriorating infrastructure, etc.), that narrative can make it easier to sell the resulting regulations to the public or, more importantly, the courts. Going forward, I suspect that this will become a commonly used playbook for many state and local officials looking to limit the reach of new technologies, including ride-sharing companies, electric scooters, driverless cars, drones, and many others.
To be clear, infrastructure control is both (a) a legitimate state and local prerogative; and (b) something that has been used in the past to control innovation and entry in other sectors. But I suspect that this approach is about to become far more prevalent because a full-frontal defense of barriers to innovation is far more likely to face serious public and legal challenges. Continue reading →
By Andrea O’Sullivan and Adam Thierer (First published at The Bridge on August 1, 2018.)
Technology is changing the ways that entrepreneurs interact with, and increasingly get away from, existing government regulations. The ongoing legal battles surrounding 3D-printed weapons provide yet another timely example.
For years, a consortium of techies called Defense Distributed has sought to secure more protections for gun owners by making the code that allows someone to print their own guns available online. Rather than take its fight to Capitol Hill and spend billions of dollars lobbying in potentially fruitless pursuit of marginal legislative victories, Defense Distributed ties its fortunes to the mast of technological determinism and blurs the lines between regulated physical reality and the open world of cyberspace.
The federal government moved fast, with gun control advocates like Senator Chuck Schumer (D-NY) and former Representative Steve Israel (D-NY) proposing legislation to criminalize Defense Distributed’s activities. They failed.
Plan B in the effort to quash these acts of 3D-printing disobedience was to classify the computer-aided design (CAD) files that Defense Distributed posted online as a kind of internationally controlled munition. The US State Department engaged in a years-long legal brawl over whether or not Defense Distributed violated established International Traffic in Arms Regulations (ITAR). The group pulled down the files while the issue was examined in court, but the code had long since been uploaded to sharing sites like The Pirate Bay. The files have also been available on the Internet Archive for many years. The CAD, if you will excuse the pun, is out of the bag.
In a surprising move, the Department of Justice suddenly moved to drop the suit and settle with Defense Distributed last month. It agreed to cover the group’s legal fees and cease its attempt to regulate code already easily accessible online. While no legal precedent was set, since this was merely a settlement, it is likely that the government realized that its case would be unwinnable.
Gun control advocates did not react well to this legal retreat. Continue reading →
There was horrible news from Tempe, Arizona this week as a pedestrian was struck and killed by a driverless car owned by Uber. This is the first fatality of its type and is drawing widespread media attention as a result. According to both police statements and Uber itself, the investigation into the accident is ongoing and Uber is assisting in the investigation. While this certainly is a tragic event, we cannot let it cost us the life-saving potential of autonomous vehicles.
While any fatal traffic accident involving a driverless car is certainly sad, we can’t ignore the fact that, each and every day in the United States, letting human beings drive on public roads proves far more dangerous. This single event has led some critics to ask why we allow driverless cars to be tested on public roads at all before they have been proven 100% safe. Driverless cars can help reverse a public health disaster decades in the making, but only if policymakers allow real-world experimentation to continue.
Let’s be more concrete about this: Each day, Americans take 1.1 billion trips, driving 11 billion miles in vehicles that weigh, on average, between 1.5 and 2 tons. Sadly, about 100 people die and over 6,000 are injured each day in car accidents. Some 94 percent of these accidents have been shown to be attributable to human error, and this deadly trend has been worsening as we become more distracted while driving. Moreover, according to the Centers for Disease Control and Prevention, almost 6,000 pedestrians were killed in traffic accidents in 2016, which means there was roughly one crash-related pedestrian death every 1.5 hours. In Arizona, the problem is even more pronounced, with the state ranked 6th worst for pedestrians and the Phoenix area ranked the 16th worst metro for such accidents nationally. Continue reading →
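The cited figures are easy to sanity-check with some back-of-the-envelope arithmetic (this is an illustrative sketch using the rounded numbers quoted above, not data from the original sources):

```python
# Rough arithmetic check of the traffic-safety statistics quoted above.
# All inputs are the rounded figures cited in the text.

pedestrian_deaths_2016 = 6000   # "almost 6,000" pedestrian deaths (CDC figure)
hours_per_year = 365 * 24       # 8,760 hours

# Hours between pedestrian deaths: ~1.5 hours, matching the quoted rate.
hours_per_death = hours_per_year / pedestrian_deaths_2016
print(f"About one pedestrian death every {hours_per_death:.1f} hours")

# Annualizing the quoted daily tolls for all crashes:
deaths_per_day = 100
injuries_per_day = 6000
print(f"Roughly {deaths_per_day * 365:,} deaths and "
      f"{injuries_per_day * 365:,} injuries per year")
```

Annualized, the daily figures imply tens of thousands of deaths and over two million injuries per year, which is why the post treats human driving as an ongoing public health disaster.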
We hear a lot these days about “technological moonshots.” It’s an interesting phrase because both of the words in it are often left undefined. I won’t belabor the point about how people define–or, rather, fail to define–“technology” when they use it. I’ve already spent a lot of time writing about that problem. See, for example, this constantly updated essay here about “Defining ‘Technology.'” It’s a compendium I began curating years ago that collects what dozens of others have had to say on the matter. I’m always struck by how many different definitions I keep unearthing.
The term “moonshot” has a similar problem. The first meaning is the literal one that hearkens back to President Kennedy’s famous 1962 “we choose to go to the moon” speech. That use of the term implies large government programs and agencies, centralized control, and top-down planning with a very specific political objective in mind. Increasingly, however, the term “moonshot” is used more generally, as I note in this new Mercatus essay about “Making the World Safe for More Moonshots.” My Mercatus Center colleague Donald Boudreaux has referred to moonshots as “radical but feasible solutions to important problems,” and Mike Cushing of Enterprise Innovation defines a moonshot as an “innovation that achieves the previously unthinkable.” I like that more generic use of the term and think it could be applied appropriately when discussing the big innovations many of us hope to see in fields as diverse as quantum computing, genetic editing, AI and autonomous systems, supersonic transport, and much more. I still have some reservations about the term, but I think it’s definitely better than “disruptive innovation,” which is also used differently by various scholars and pundits.
The Technology Liberation Front is the tech policy blog dedicated to keeping politicians' hands off the 'net and everything else related to technology.