Good FAA Update on State and Local Rules for Drone Airspace
https://techliberation.com/2023/08/07/good-faa-update-on-state-and-local-rules-for-drone-airspace/
Mon, 07 Aug 2023 14:36:02 +0000

There’s been exciting progress in US drone policy in the past few months. First, in April the FAA announced surprising new guidance regarding drone airspace access in its Aeronautical Information Manual. As I noted in an article for the State Aviation Journal, the new Manual states:

There can be certain local restrictions to airspace. While the FAA is designated by federal law to be the regulator of the NAS [national airspace system], some state and local authorities may also restrict access to local airspace. UAS pilots should be aware of these local rules.

That April update has been followed by a bigger drone policy update from the FAA. On July 14, the FAA went further than the April guidance, updating and replacing its 2015 guidance to states and localities about drone regulation and airspace policy.

In this July 2023 guidance, I was pleasantly surprised to see the FAA recognize some state and local authority in the “immediate reaches” airspace. Notably, in the new guidance the FAA expressly notes that state laws that “prohibit [or] restrict . . . operations by UAS in the immediate reaches of property” are an example of laws not subject to conflict preemption.

A handful of legal scholars–like ASU Law Professor Troy Rule and me–have urged federal officials for years to recognize that states, localities, and landowners have a significant say in what happens in very low-altitude airspace–the “immediate reaches” above land. That’s because the US Supreme Court in US v. Causby recognized that the “immediate reaches” above land are real property owned by the landowner:

[I]t is obvious that, if the landowner is to have full enjoyment of the land, he must have exclusive control of the immediate reaches of the enveloping atmosphere. …As we have said, the flight of airplanes, which skim the surface but do not touch it, is as much an appropriation of the use of the land as a more conventional entry upon it.

Prior to these recent updates, the FAA’s position on which rules apply in very low-altitude airspace–FAA rules or state property rules–was confusing. The agency informally asserts authority to regulate drone operations down to “the grass tips”; however, many landowners don’t want drones entering the airspace immediately above their land without permission and would sue to protect their property rights. This is not a purely academic concern: uncertainty about whether and when drones can fly in very low-altitude airspace has been damaging for the industry. As the Government Accountability Office told Congress in 2020:

The legal uncertainty surrounding these [low-altitude airspace] issues is presenting challenges to integration of UAS [unmanned aircraft systems] into the national airspace system.


With this July update, the FAA helps clarify matters. To my knowledge, this is the first mention of “immediate reaches,” and implicit reference to Causby, by the FAA. The update helpfully protects, in my view, property rights and federalism. It also represents a win for the drone industry, which finally has some federal clarity on this after a decade of uncertainty about how low they can fly. Drone operators now know they can sometimes be subject to local rules about aerial trespass. States and cities now know that they can create certain, limited prohibitions, which will be helpful to protect sensitive locations like neighborhoods, stadiums, prisons, and state parks and conservation areas.

As an aside: it seems possible one motivation for the FAA adding this language is to foreclose future takings litigation (a la Cedar Point Nursery v. Hassid) against the FAA. With this new guidance, the FAA can point out in future takings litigation that it does not authorize drone operations in the immediate reaches of airspace; the guidance indicates that operations in the immediate reaches are largely a question of state property and trespass law.

On the whole, I think this new FAA guidance is strong, especially the first formal FAA recognition of some state authority over the “immediate reaches.” That said, as a USDOT Inspector General report to Congress pointed out last year, the FAA has not been responsive when state officials have questions about creating drone rules to complement federal rules. In 2018, for instance, a lead State “participant [in an FAA drone program] requested a clarification as to whether particular State laws regarding UAS conflicted with Federal regulations. According to FAA, as of February 2022 . . . FAA has not yet provided an opinion in response to that request.”

Four-plus years of silence from the FAA is a long time for a state official to wait, and it’s a lifetime for a drone startup looking for legal clarity. I do worry about agency non-answers to preemption questions from states, and about how other provisions in this new guidance will be interpreted. Hopefully the new guidance means FAA employees can be more responsive to inquiries from state officials. With the April and July airspace policy updates, the FAA, state aviation offices, the drone industry, and local officials are in a better position to create commercial drone networks nationwide while protecting the property and privacy expectations of residents.

Further Reading

See my July report on drones and airspace policy for state officials, including state rankings: “2023 State Drone Commerce Rankings: How prepared is your state for drone commerce?”.

Studies Document Growing Cost of EU Privacy Regulations
https://techliberation.com/2023/02/09/studies-document-growing-cost-of-eu-privacy-regulations/
Thu, 09 Feb 2023 16:22:47 +0000

[Originally published on Medium on 2/5/2022]

In an earlier essay, I explored “Why the Future of AI Will Not Be Invented in Europe” and argued that, “there is no doubt that European competitiveness is suffering today and that excessive regulation plays a fairly significant role in causing it.” This essay summarizes some of the major academic literature that leads to that conclusion.

Since the mid-1990s, the European Union has been layering on highly restrictive policies governing online data collection and use. The most significant of the E.U.’s recent mandates is the 2018 General Data Protection Regulation (GDPR). This regulation established even more stringent rules governing the protection and movement of personal data and limiting what organizations can do with it. Data minimization is the major priority of this system, but the regulatory scheme also involves many other types of restrictions and reporting requirements. This policy framework has ramifications for the future of next-generation technologies, especially artificial intelligence and machine learning systems, which rely on high-quality data sets to improve their efficacy.

Whether or not the E.U.’s complicated regulatory regime has actually resulted in truly meaningful privacy protections for European citizens relative to people in other countries remains open to debate. It is very difficult to measure and compare highly subjective values like privacy across countries and cultures. This makes benefit-cost analysis for privacy regulation extremely challenging — especially on the benefits side of the equation.

What is no longer up for debate, however, is the cost side of the equation and the question of what sort of consequences the GDPR has had on business formation, competition, investment, and so on. On these matters, standardized metrics exist and the economic evidence is abundantly clear: the GDPR has been a disaster for Europe.

Summary of Major Studies on Impact of EU Data Regulation

Consider the impact of E.U. data controls on business startups and market structure. GDPR and other regulations greatly limit the flow of data to the innovative upstarts who need it most to compete, leaving only the largest companies, which can afford to comply, in control of most of the market. Benjamin Mueller of ITIF notes that it is already the case that just “two of the world’s 30 largest technology firms by market capitalization are from the EU,” and only “5 of the 100 most promising AI startups are based in Europe,” while private funding of AI startups in Europe in 2020 ($4 billion) was dwarfed by funding in the US ($36 billion) and China ($25 billion). These issues are even more pressing as the E.U. looks to advance a new AI Act, which would layer on still more regulatory restrictions.

In concrete terms, this has meant that the E.U. came away from the digital revolution with “the complete absence of superstar companies,” argue competition policy experts Nicolas Petit and David Teece. There are no European versions of Microsoft, Google, or Apple, even though Europeans clearly demand the sort of products and services those US-based companies provide. Entrepreneurialism scholar Zoltan Acs asks: “What has been the outcome of E.U. policy in limiting entrepreneurial activity over recent decades?” His conclusion:

It is immediately clear… that the United States and China dominate the platform landscape. Based on the market value of top companies, the United States alone represents 66% of the world’s platform economy with 41 of the top 100 companies. European platform-based companies play a marginal role, with only 3% of market value.

Several recent studies have documented the costs associated with the GDPR and the E.U.’s heavy-handed approach to data flows more generally. Here is a rundown of some of the academic evidence and a summary of the major findings from these studies.

“There is a growing body of economic literature and commentary showing that the costs of implementing the GDPR benefit large online platforms, and that consent-based data collection gives a competitive advantage to firms offering a range of consumer-facing products compared to smaller market actors. This in turn increases concentration in a number of digital markets where access to data is important, by creating barriers to entry or encouraging market exit.” (p. 2–3)

“this paper examines how privacy regulation shaped firm performance in a large sample of companies across 61 countries and 34 industries. Controlling for firm and country-industry-year unobserved characteristics, we compare the outcomes of firms at different levels of exposure to EU markets, before and after the enforcement of the GDPR in 2018. We find that enhanced data protection had the unintended consequence of reducing the financial performance of companies targeting European consumers. Across our full sample, firms exposed to the regulation experienced an 8% decline in profits, and a 2% reduction in sales. An exception is large technology companies, which were relatively unaffected by the regulation on both performance measures. Meanwhile, we find the negative impact on profits among small technology companies to be almost double the average effect across our full sample. Following several robustness tests and placebo regressions, we conclude that the GDPR has had significant negative impacts on firm performance in general, and on small companies in particular.” (p. 1)

“We show that websites’ vendor use falls after the European Union’s General Data Protection Regulation (GDPR), but that market concentration also increases among technology vendors that provide support services to websites. We collect panel data on the web technology vendors selected by more than 27,000 top websites internationally. The week after the GDPR’s enforcement, website use of web technology vendors falls by 15% for EU residents. Websites are more likely to drop smaller vendors, which increases the relative concentration of the vendor market by 17%. Increased concentration predominantly arises among vendors that use personal data such as cookies, and from the increased relative shares of Facebook and Google-owned vendors, but not from website consent requests. Though the aggregate changes in vendor use and vendor concentration dissipate by the end of 2018, we find that the GDPR impact persists in the advertising vendor category most scrutinized by regulators. Our findings shed light on potential explanations for the sudden drop and subsequent rebound in vendor usage.” (p. 1)

“GDPR creates inherent tradeoffs between data protection and other dimensions of welfare, including competition and innovation. While some of these effects were acknowledged when constructing the legal data regime, many were disregarded. Furthermore, the magnitude and breadth of such effects may well constitute an unintended and unheeded welfare-reducing consequence. As this article shows, the GDPR limits competition and increases concentration in data and data-related markets, and potentially strengthens large data controllers. It also further reinforces the already existing barriers to data sharing in the EU, thereby potentially reducing data synergies that might result from combining different datasets controlled by separate entities.” (pp. 3–4)

“Using data on 4.1 million apps at the Google Play Store from 2016 to 2019, we document that GDPR induced the exit of about a third of available apps; and in the quarters following implementation, entry of new apps fell by half. We estimate a structural model of demand and entry in the app market. Comparing long-run equilibria with and without GDPR, we find that GDPR reduces consumer surplus and aggregate app usage by about a third. Whatever the privacy benefits of GDPR, they come at substantial costs in foregone innovation.”

“this paper empirically quantifies the effects of the enforcement of the EU’s General Data Protection Regulation (GDPR) on online user behavior over time, analyzing data from 6,286 websites spanning 24 industries during the 10 months before and 18 months after the GDPR’s enforcement in 2018. A panel differences estimator, with a synthetic control group approach, isolates the short- and long-term effects of the GDPR on user behavior. The results show that, on average, the GDPR’s effects on user quantity and usage intensity are negative; e.g., the numbers of total visits to a website decrease by 4.9% and 10% due to GDPR in respectively the short- and long-term. These effects could translate into average revenue losses of $7 million for e-commerce websites and almost $2.5 million for ad-based websites 18 months after GDPR. The GDPR’s effects vary across websites, with some industries even benefiting from it; moreover, more-popular websites suffer less, suggesting that the GDPR increased market concentration.”

“This paper investigates the impact of the General Data Protection Regulation (GDPR for short) on consumers’ online browsing and search behavior using consumer panels from four countries, United Kingdom, Spain, United States, and Brazil. We find that after GDPR, a panelist exposed to GDPR submits 21.6% more search terms to access information and browses 16.3% more pages to access consumer goods and services compared to a non-exposed panelist, indicating higher friction in online search. The implications of increased friction are heterogeneous across firms: Bigger e-commerce firms see an increase in consumer traffic and more online transactions. The increase in the number of transactions at large websites is about 6 times the increase experienced by smaller firms. Overall, the post-GDPR online environment may be less competitive for online retailers and may be more difficult for EU consumers to navigate through.”

“Privacy regulations should increase trust because they provide laws that increase transparency and allow for punishment in cases in which the trustee violates trust. […] We collected survey panel data in Germany around the implementation date and ran a survey experiment with a GDPR information treatment. Our observational and experimental evidence does not support the hypothesis that the GDPR has positively affected trust. This finding and our discussion of the underlying reasons are relevant for the wider research field of trust, privacy, and big data.”

“We follow more than 110,000 websites and their third-party HTTP requests for 12 months before and 6 months after the GDPR became effective and show that websites substantially reduced their interactions with web technology providers. Importantly, this also holds for websites not legally bound by the GDPR. These changes are especially pronounced among less popular websites and regarding the collection of personal data. We document an increase in market concentration in web technology services after the introduction of the GDPR: Although all firms suffer losses, the largest vendor — Google — loses relatively less and significantly increases market share in important markets such as advertising and analytics. Our findings contribute to the discussion on how regulating privacy, artificial intelligence and other areas of data governance relate to data minimization, regulatory competition, and market structure.”

William Rinehart of the Center for Growth and Opportunity has compiled and summarized many additional studies that document the costs associated with restrictions on data, including many state privacy laws imposed in the United States.

“The Biggest Loser”: Innovation Culture Gone Wrong

Taken together, this evidence makes it clear that, “Well-meaning privacy laws can have the unintended consequence of penalizing smaller companies within technology markets.” Such laws can also have broader geopolitical ramifications for continental competitive advantage and engagement between countries. Some have argued that the United Kingdom’s so-called “Brexit” from the EU can be viewed not only as an effort to reclaim its sovereignty but more specifically “to escape its crippling regulatory structure.” The E.U.’s approach to emerging technology regulation likely had some bearing on this. Acs argues that Britain’s move was logical, “because E.U. regulations were holding back the U.K.’s strong DPE (digital platform economy).” “If the United Kingdom was to realize its economic potential,” he says, “it had to extricate itself from the European Union,” due to the growing “dysfunctional E.U. bureaucracy.”

Can Europe turn things around? Most market watchers do not believe that the E.U. will be willing to change its regulatory course in such a way that the continent would suddenly become more open to data-driven innovation. As part of a Spring 2022 journal symposium, The International Economy asked 11 experts from Europe and the U.S. to consider where the European Union currently stood in “the global tech race.” The responses were nearly unanimous and bluntly summarized in the symposium’s title: “The Biggest Loser.” Several respondents observed how “Europe is considered to be lagging behind in the global tech race,” and “is unlikely to become a global hub of innovation.” “The future will not be invented in Europe,” another respondent concluded. Europe’s risk-averse culture and preference for meticulously detailed and highly precautionary regulatory regimes were repeatedly cited as factors.

Europe has become the biggest loser on the digital technology front not because of its people but because of its policies. Europe is home to some of the most important advanced education and engineering programs in the world, and countless brilliant minds there could be building world-leading digital technology companies to rival those of the U.S., China, and the rest of the world. But Europe’s current “innovation culture” simply will not allow it.

Innovation culture refers to “the various social and political attitudes and pronouncements towards innovation, technology, and entrepreneurial activities that, taken together, influence the innovative capacity of a culture or nation.” A positive innovation culture depends upon a dynamic, open economy that encourages new entry, entrepreneurialism, continuous investment, and the free movement of goods, ideas, and talent.

At this point in time, it is clear that — at least for data-driven sectors — the E.U. has created the equivalent of an anti-innovation culture, and the GDPR has clearly played a major role in that outcome. This regulatory regime has also had devastating consequences for venture capital formation and investment more generally in Europe. “Public policy and attitudes explain the relative technological decline and lack of economic dynamism,” Petit and Teece argue, and it has resulted in “weak venture capital markets, fragmented research capabilities, low worker mobility and frustrated entrepreneurs.”

Industrial Policy Won’t Save Europe

While the E.U. is aggressively regulating data-driven sectors, it is simultaneously trying to use industrial policy programs to advance new technological capabilities and innovations. European policymakers would obviously like to avoid a repeat of the past quarter century and the lack of digital technology competition and innovation they witnessed.

But past European industrial policy efforts on the digital technology front have largely failed, as Connor Haaland and I documented earlier. Zoltan Acs notes that, despite many state efforts to promote digital innovation across the continent in recent decades, the E.U.’s regulatory policies have produced the opposite. “The European Union protected traditional industries and hoped that existing firms would introduce new technologies. This was a policy designed to fail,” he argues. A major recent book, Questioning the Entrepreneurial State: Status-quo, Pitfalls, and the Need for Credible Innovation Policy (Springer, 2022), offers additional evidence of the failure of European industrial policy efforts. No amount of industrial policy planning and spending is going to be able to overcome a negative innovation culture that suffocates entrepreneurialism and investment out of the gate.

These findings have lessons for policymakers in the United States, too, especially with President Biden and even many Republicans now calling for heavy-handed top-down regulation of digital technology companies. Basically, “President Biden Wants America to Become Europe on Tech Regulation,” I argued in a recent R Street Institute blog post. In a letter to the Wall Street Journal, I responded to recent op-eds by both President Biden and former Trump Administration Attorney General William Barr in which they both advocated regulations that would take us down the disastrous path that the European Union has already charted.

“The only thing Europe exports now on the digital-technology front is regulation,” I noted in my response, and that makes it all the more mind-boggling that Biden and Barr want to go down that same path. “Overregulation by EU bureaucrats led Europe’s best entrepreneurs and investors to flee to the U.S. or elsewhere in search of the freedom to innovate.” This is the wrong innovation culture for the United States if we hope to lead the unfolding Computational Revolution — and to counter China’s expanding efforts to top us in it.

In closing, policymakers should never lose sight of the most fundamental lesson of innovation policy, which can be stated quite simply: You only get as much innovation as you allow to begin with. If the public policy defaults are all set to be maximally restrictive and limit entrepreneurialism and experimentation by design, then it should be no surprise when the country or continent fails to generate meaningful innovation, investment, new companies, and global competitive advantage. The European model is no model for America.

Additional reading:

Self-Inflicted Technological Suicide
https://techliberation.com/2023/01/26/self-inflicted-technological-suicide/
Fri, 27 Jan 2023 00:26:11 +0000

The Wall Street Journal has run my response to troubling recent op-eds by President Biden (“Republicans and Democrats, Unite Against Big Tech Abuses“) and former Trump Administration Attorney General William Barr (“Congress Must Halt Big Tech’s Power Grab“) in which they both called for European-style regulation of U.S. digital technology markets.

“The only thing Europe exports now on the digital-technology front is regulation,” I noted in my response, and that makes it all the more mind-boggling that Biden and Barr want to go down that same path. “[T]he EU’s big-government regulatory crusade against digital tech: Stagnant markets, limited innovation and a dearth of major players. Overregulation by EU bureaucrats led Europe’s best entrepreneurs and investors to flee to the U.S. or elsewhere in search of the freedom to innovate.”

Thus, the Biden and Barr plans for importing European-style tech mandates “would be a stake through the heart of the ‘permissionless innovation’ that made America’s info-tech economy a global powerhouse.” In a longer response to the Biden op-ed that I published on the R Street blog, I note that:

“It is remarkable to think that after years of everyone complaining about the lack of bipartisanship in Washington, we might get the one type of bipartisanship America absolutely does not need: the single most destructive technological suicide in U.S. history, with mandates being substituted for markets, and permission slips for entrepreneurial freedom.”

What makes all this even more remarkable is that these calls for hyper-regulation come at a time when China is challenging America’s dominance in technology and AI. Thus, “new mandates could compromise America’s lead,” I conclude. “Shackling our tech sectors with regulatory chains will hobble our nation’s ability to meet global competition and undermine innovation and consumer choice domestically.”

Jump over to the WSJ to read my entire response (“EU-Style Regulation Begets EU-Style Stagnation“) and to the R Street blog for my longer essay (“President Biden Wants America to Become Europe on Tech Regulation“).

Gonzalez v Google, Section 230 & the Future of Permissionless Innovation
https://techliberation.com/2022/12/09/gonzalez-v-google-section-230-the-future-of-permissionless-innovation/
Fri, 09 Dec 2022 13:15:15 +0000

Over at Discourse magazine this week, my R Street colleague Jonathan Cannon and I have posted a new essay on how it has been “Quite a Fall for Digital Tech.” We mean that in both senses: the last few months have witnessed serious market turmoil for some of America’s leading tech companies, and the political situation for digital tech more generally has become perilous. Plenty of people on the Left and the Right now want a pound of flesh from the info-tech sector, and the starting cut at the body involves Section 230, the 1996 law that shields digital platforms from liability for content posted by third parties.

With the Supreme Court recently announcing it will hear Gonzalez v. Google, a case that could significantly narrow the scope of Section 230, the stakes have grown higher. It was already the case that federal and state lawmakers were looking to chip away at Sec. 230’s protections through an endless variety of regulatory measures. But if the Court guts Sec. 230 in Gonzalez, then it will really be open season on tech companies, as lawsuits will fly at every juncture whenever someone does not like a particular content moderation decision. Cannon and I note in our new essay that,

if the court moves to weaken liability protections for digital platforms, the ramifications will be profoundly negative. While many critics today complain that the law’s liability protections have been too generous, the reality is that Section 230 has been the legal linchpin supporting the permissionless innovation model that fueled America’s commanding lead in the digital information revolution. Thanks to the law, digital entrepreneurs have been free to launch bold new ideas without fear of punishing lawsuits or regulatory shenanigans. This has boosted economic growth and dramatically broadened consumer information and communications options.

Many critics of Sec. 230 claim that reforms are needed to “rein in Big Tech.” But, ironically, gutting Sec. 230 would probably only make big tech companies even bigger because the smaller players in the market would struggle to deal with the mountains of regulations and lawsuits that would come about in its absence. Cannon and I continue on to explore what it means for the next generation of online innovators if these court cases go badly and Section 230 is scaled back or gutted:

Section 230 has been a legal cornerstone of the entire ecosystem. All the large-scale platforms we depend on for our online experience would never have gotten off the ground without its protection. […] More importantly, these platforms have relied on being able to host third-party content without fear of opening a Pandora’s box of private litigation and endless challenges from governments. By removing these protections, platforms will be forced to significantly increase their moderation practices to reduce risk of suits from zealous litigants. Besides the chilling effect this will have on speech, it also will put up a cost-prohibitive barrier for smaller entrants who lack the resources to have an army of content moderators to find and eliminate undesirable content.

The broader effect on market dynamism and the nation’s technological competitiveness will be profound as permissionless innovation is replaced by mountains of top-down permission slips. “If America’s digital sector gets kneecapped by the Supreme Court, or if new regulations or legislative proposals scale back Section 230 protections, it will be significantly more difficult for U.S. firms to continue to lead in the development and commercialization of new technologies,” we conclude.

Jump over to Discourse to read the entire piece.

Why the Endless Techno-Apocalyptica in Modern Sci-Fi?
https://techliberation.com/2022/09/02/why-the-endless-techno-apocalyptica-in-modern-sci-fi/
Fri, 02 Sep 2022 15:06:06 +0000

James Pethokoukis of AEI interviews me about the current miserable state of modern science fiction, which is dripping with dystopian dread in every movie, show, and book plot. How does all this techno-apocalyptica affect societal and political attitudes about innovation broadly and emerging technologies in particular? Our discussion builds on a recent Discourse article of mine, “How Science Fiction Dystopianism Shapes the Debate over AI & Robotics.” [Pasted down below.] Swing on over to Jim’s “Faster, Please” newsletter to hear what Jim and I have to say. And, as a bonus question, Jim asked me if we are doing a good job of inspiring kids to have a sense of wonder and to take risks. I have some serious concerns that we are falling short on that front.

How Science Fiction Dystopianism Shapes the Debate over AI & Robotics

[Originally ran on Discourse on July 26, 2022.]

George Jetson will be born this year. We don’t know the exact date of this fictional cartoon character’s birth, but thanks to some skillful Hanna-Barbera hermeneutics the consensus seems to be sometime in 2022.

In the same episode that we learn George’s approximate age, we’re also told the good news that his life expectancy in the future is 150 years. It was one of the many ways The Jetsons, though a cartoon for children, depicted a better future for humanity thanks to exciting innovations. Another was a helpful robot named Rosie, along with a host of other automated technologies—including a flying car—that made George and his family’s life easier.

 

Most fictional portrayals of technology today are not as optimistic as  The Jetsons, however. Indeed, public and political conceptions about artificial intelligence (AI) and robotics in particular are being strongly shaped by the relentless dystopianism of modern science fiction novels, movies and television shows. And we are worse off for it.

AI, machine learning, robotics and the power of computational science hold the potential to drive explosive economic growth and profoundly transform a diverse array of sectors, while providing humanity with countless technological improvements in medicine and healthcare, financial services, transportation, retail, agriculture, entertainment, energy, aviation, the automotive industry and many others. Indeed, these technologies are already deeply embedded in these and other industries and making a huge difference.

But that progress could be slowed and in many cases even halted if public policy is shaped by a precautionary-principle-based mindset that imposes heavy-handed regulation based on hypothetical worst-case scenarios. Unfortunately, the persistent dystopianism found in science fiction portrayals of AI and robotics conditions the ground for public policy debates, while also directing attention away from some of the more real and immediate issues surrounding these technologies.

Incessant Dystopianism Untethered from Reality

In his recent book Robots, Penn State business professor John Jordan observes how over the last century “science fiction set the boundaries of the conceptual playing field before the engineers did.” Pointing to the plethora of literature and film that depicts robots, he notes: “No technology has ever been so widely described and explored before its commercial introduction.” Not the internet, cell phones, atomic energy or any others.

Indeed, public conceptions of these technologies, and even the very vocabulary of the field, have been shaped heavily by sci-fi plots beginning a hundred years ago with the 1920 play R.U.R. (Rossum’s Universal Robots), which gave us the term “robot,” and Fritz Lang’s 1927 silent film Metropolis, with its memorable Maschinenmensch, or “machine-human.” There has been a deep and rich imagination surrounding AI and robotics since then, but it has tended to be mostly negative and has grown more hostile over time.

The result has been a public and policy dialogue about AI and robotics that is focused on an endless parade of horribles about these technologies. Not surprisingly, popular culture also affects journalistic framings of AI and robotics. Headlines breathlessly scream of how “Robots May Shatter the Global Economic Order Within a Decade,” but only if we’re not dead already because… “If Robots Kill Us, It’s Because It’s Their Job.”

Dark depictions of AI and robotics are ever-present in popular modern sci-fi movies and television shows. A short list includes:  2001: A Space Odyssey, Avengers: Age of Ultron, Battlestar Galactica (both the 1978 original and the 2004 reboot), Black Mirror, Blade Runner, Ex Machina, Her, The Matrix, Robocop, The Stepford Wives, Terminator, Transcendence, Tron, WALL-E, Wargames and Westworld, among countless others. The least nefarious plots among these films and television shows rest on the idea that AI and robotics are going to drive us to a life of distraction, addiction or sloth. In more extreme cases, we’re warned about a future in which we are either going to be enslaved or destroyed by our new robotic or algorithmic overlords.

Don’t get me wrong; the movies and shows on the above list are some of my favorites.  2001 and Blade Runner are both in my top 5 all-time flicks, and the reboot of Battlestar is one of my favorite TV shows. The plots of all these movies and shows are terrifically entertaining and raise many interesting issues that make for fun discussions.

But they are not representative of reality. In fact, the vast majority of computer scientists and academic experts on AI and robotics agree that claims about machine “superintelligence” are wildly overplayed and that there is no possibility of machines gaining human-equivalent knowledge any time soon—or perhaps ever. “In any ranking of near-term worries about AI, superintelligence should be far down the list,” argues Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans.

Contra the  Terminator-esque nightmares envisioned in so many sci-fi plots, MIT roboticist Rodney Brooks says that “fears of runaway AI systems either conquering humans or making them irrelevant aren’t even remotely well grounded.” John Jordan agrees, noting: “The fear and uncertainty generated by fictional representations far exceed human reactions to real robots, which are often reported to be ‘underwhelming.’”

The same is true for AI more generally. “A close inspection of AI reveals an embarrassing gap between actual progress by computer scientists working on AI and the futuristic visions they and others like to describe,” says Erik Larson, author of The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do. Larson refers to this extreme thinking about superintelligent AI as “technological kitsch,” or exaggerated sentimentality and melodrama that is untethered from reality. Yet the public imagination remains captivated by tales of impending doom.

Seeding the Ground with Misery and Misguided Policy

But isn’t it all just harmless fun? After all, it’s just make believe. Moreover, can’t science fiction—no matter how full of techno-misery—help us think through morally weighty issues and potential ethical conundrums involving AI and robotics?

Yes and no. Titillating fiction has always had a cathartic element to it and helped us cope with the unknown and mysterious. Most historians believe it was Aristotle in his Poetics who first used the term katharsis when discussing how Greek tragedies helped the audience “through pity and fear effecting the proper purgation of these emotions.”

But are modern science fiction depictions of AI and robotics helping us cope with technological change, or instead just stoking a constant fear of it? Modern sci-fi isn’t so much purging negative emotion about the topic at hand as it is endlessly adding to the sense of dread surrounding these technologies. What are the societal and political ramifications of a cultural frame of reference that suggests an entire new class of computational technologies will undermine rather than enrich our human experiences and, possibly, our very existence?

The New Yorker’s Jill Lepore says we live in “A Golden Age for Dystopian Fiction,” but she worries that this body of work “cannot imagine a better future, and it doesn’t ask anyone to bother to make one.” She argues this “fiction of helplessness and hopelessness” instead “nurses grievances and indulges resentments” and that “[i]ts only admonition is: Despair more.” Lepore goes so far as to claim that, because “the radical pessimism of an unremitting dystopianism” has appeal to many on both the left and right, it “has itself contributed to the unravelling of the liberal state and the weakening of a commitment to political pluralism.”

I’m not sure dystopian fiction is driving the unravelling of pluralism, but Lepore is on to something when she notes how a fiction rooted in misery about the future will likely have political consequences at some point.

Techno-panic Thinking Shapes Policy Discussions

The ultimate question is whether public policy toward new AI and robotic technologies will be shaped by this hyperpessimistic thinking in the form of precautionary principle regulation, which essentially treats innovations as “guilty until proven innocent” and seeks to intentionally slow or retard their development.

If the extreme fears surrounding AI and robotics  do inspire precautionary controls—as they already have in the European Union—then we need to ask how the preservation of the technological status quo could undermine human well-being by denying society important new life-enriching and life-saving goods and services. Technological stasis does not provide a safer or healthier society, but instead holds back our collective ability to innovate, prosper and better our lives in meaningful ways.

Louis Anslow, curator of Pessimists Archive, calls this “the Black Mirror fallacy,” referencing the British television show that has enjoyed great success peddling tales of impending techno-disasters. Anslow defines the fallacy as follows: “When new technologies are treated as much more threatening and risky than old technologies with proven risks/harms. When technological progress is seen as a bigger threat than technological stagnation.”

Anslow’s Pessimists Archive collects real-world case studies of how moral panic and techno-panics have accompanied the introduction of new inventions throughout history. He notes, “Science fiction has conditioned us to be hypervigilant about avoiding dystopias born of technological acceleration and totally indifferent to avoiding dystopias born of technological stagnation.”

Techno-panics can have real-world consequences when they come to influence policymaking. Robert Atkinson, president of the Information Technology & Innovation Foundation (ITIF), has documented the many ways that “the social and political commentary [about AI] has been hype, bordering on urban myth, and even apocalyptic.” The more these attitudes and arguments come to shape policy considerations, the more likely it is that precautionary principle-based recommendations will drive AI and robotics policy, preemptively limiting their potential. ITIF has published a report documenting “Ten Ways the Precautionary Principle Undermines Progress in Artificial Intelligence,” identifying how it will slow algorithmic advances in key sectors.

Similarly, in his important recent book Where Is My Flying Car?, scientist J. Storrs Hall documents how “regulation clobbered the learning curve” for many important technologies in the U.S. over the last half century, especially nuclear, nanotech and advanced aviation. Society lost out on many important innovations due to endless bureaucratic delays, often thanks to opposition from special interests, anti-innovation activists, overzealous trial lawyers and a hostile media. Hall explains how this also sent a powerful signal to talented young people who might have been considering careers in those sectors. Why go into a field demonized by so many and where your creative abilities will be hamstrung by precautionary constraints?

Disincentivizing Talent

Hall argues that in those crucial sectors, this sort of mass talent migration “took our best and brightest away from improving our lives,” and he warns that those who still hope to make a career in such fields should be prepared to be “misconstrued and misrepresented by activists, demonized by ignorant journalists, and strangled by regulation.”

Is this what the future holds for AI and robotics? Hopefully not. America continues to generate world-class talent on this front today across a diverse array of businesses and university programs. But if the waves of negativism about AI and robotics persist, we shouldn’t be surprised if they result in a talent shift away from building these technologies and toward fields that instead look to restrict them.

For example, Hall documents how, following the sudden shift in public attitudes surrounding nuclear power 50 years ago, “interests, and career prospects, in nuclear physics imploded” and “major discoveries stopped coming.” Meanwhile, enrollment in law schools and other soft sciences typically critical of technological innovation enjoyed greater success. Nobody writes any sci-fi stories about what a disaster that development has been for innovation in the energy sphere, even though it is now abundantly clear how precautionary principle policies have undermined environmental goals and human welfare, with major geopolitical consequences for many nations.

If America loses the talent race on the AI front, it has ramifications for global competitive advantage going forward, especially as China races to catch up. In a world of global innovation arbitrage, talent and venture capital will flow to wherever it is treated most hospitably. Demonizing AI and robotics won’t help recruit or retain the next generation of talent and investors America needs to remain on top.

Flipping the Script

Some folks have had enough of the relentless pessimism surrounding technology and progress in modern science fiction and are trying to do something to reverse it. In a 2011 Wired essay on the dangers of “Innovation Starvation,” the acclaimed novelist Neal Stephenson decried the fact that “the techno-optimism of the Golden Age of [science fiction] has given way to fiction written in a generally darker, more skeptical and ambiguous tone.” While good science fiction “supplies a plausible, fully thought-out picture of an alternate reality in which some sort of compelling innovation has taken place,” Stephenson said modern sci-fi was almost entirely focused on its potential downsides.

To help reverse this trend, Stephenson worked with the Center for Science and the Imagination at Arizona State University to launch Project Hieroglyph, an effort to support authors willing to take a more optimistic view of the future. It yielded a 2014 book, Hieroglyph: Stories and Visions for a Better Future, which included almost 20 contributors. Later, in 2018, The Verge launched the “Better Worlds” project to support 10 writers of “stories that inspire hope” about innovation and the future. “Contemporary science fiction often feels fixated on a sort of pessimism that peers into the world of tomorrow and sees the apocalypse looming more often than not,” said Verge culture editor Laura Hudson when announcing the project.

Unfortunately, these efforts have not captured much public attention, and that’s hardly surprising. “Pessimism has always been big box office,” says science writer Matt Ridley, primarily because it really is more entertaining. Even though many of the great sci-fi writers of the past, including Isaac Asimov, Arthur C. Clarke, and Robert Heinlein, wrote positively about technology, they ultimately had more success selling stories with darker themes. It’s just the nature of things more generally, from the best of Greek tragedy to Shakespeare and on down the line. There’s a reason they’re still rebooting Beowulf all these years later, after all.

So, There’s Star Trek and What Else?

While technological innovation will never enjoy the respect it deserves for being the driving force behind human progress, one can at least hope that more pop culture treatments of it might give it a fair shake. When I ask crowds of people to name a popular movie or television show that includes mostly positive depictions of technology, Star Trek is usually the first (and sometimes the only) thing people mention. It’s true that, on balance, technology was treated as a positive force in the original series, although “V’Ger”—a defunct space probe that attains a level of consciousness—was the prime antagonist in Star Trek: The Motion Picture. Later, Star Trek: The Next Generation gave us the always helpful android Data, but also created the lasting mental image of the Borg, a terrifying race of cyborgs hell-bent on assimilating everyone into their hive mind.

The Borg provided some of The Next Generation’s most thrilling moments, but they also created a new cultural meme, with tech critics often worrying about how today’s humans are being assimilated into the hive mind of modern information systems. Philosopher Michael Sacasas even coined the term “the Borg Complex” to refer to a supposed tendency “exhibited by writers and pundits who explicitly assert or implicitly assume that resistance to technology is futile.” After years of a friendly back-and-forth with Sacasas, I even felt compelled to wrap up my book Permissionless Innovation with a warning to other techno-optimists not to fall prey to this deterministic trap when defending technological change. Regardless of where one falls on that issue, the fact that Sacasas and I were having a serious philosophical discussion premised on a famous TV plotline serves as another indication of how much science fiction shapes public and intellectual debate over progress and innovation.

And, truth be told, some movies know how to excite the senses without resorting to dystopianism. Interstellar and The Martian are two recent examples that come to mind. Interestingly, space exploration technologies themselves usually get a fair shake in many sci-fi plots, often only to be undermined by onboard AIs or androids, as occurred not only in 2001 with the eerie HAL 9000, but also in Alien.

There are some positive (and sometimes humorous) depictions of robots, as in Robot & Frank, or touching ones, as in Bicentennial Man. Beyond The Jetsons, other cartoons like The Iron Giant and Big Hero 6 offer kinder visions of robots. KITT, a super-intelligent robot car, was Michael Knight’s dependable ally in NBC’s Knight Rider. And R2-D2 is always a friendly helper throughout the Star Wars franchise. But generally speaking, modern sci-fi continues to churn out far more negativism about AI and robotics.

What If We Took It All Seriously?

So long as the public and political imagination is spellbound by the machine machinations that dystopian sci-fi produces, we’ll be at risk of being stuck with absurd debates that have no meaningful solution other than “Stop the clock!” or “Ban it all!” Are we really being assimilated into the Borg hive mind, or just buying time until a coming robopocalypse grinds us into dust (or dinner)?

If there were a kernel of truth to any of this, then we should adopt some of the extreme solutions Nick Bostrom of Oxford suggests in his writing on these issues. Those radical steps include worldwide surveillance and enforcement mechanisms for scientists and researchers developing algorithmic and robotic systems, as well as some sort of global censorship of information about these capabilities to ensure the technology is not used by bad actors.

To Bostrom’s great credit, he is at least willing to tell us how far he’d go. Most of today’s tech critics prefer to just spread a gospel of gloom and doom and suggest  something must be done, without getting into the ugly details about what a global control regime for computational science and robotic engineering looks like. We should reject such extremist hypothesizing and understand that silly sci-fi plots, bombastic headlines and kooky academic writing should not be our baseline for serious discussions about the governance of artificial intelligence and robotics.

At the same time, we absolutely should consider what downsides any technology poses for individuals and society. And, yes, some precautions of a regulatory nature will be needed. But most of the problems envisioned by sci-fi writers are not what we should be concerned with. There are far more specific and nuanced problems that AI and robotics confront us with today, and these deserve more serious consideration and governance steps. How to program safer drones and driverless cars, improve the accuracy of algorithmic medical and financial technologies, and ensure better transparency for government uses of AI are all more mundane but very important issues that require reasoned discussion and balanced solutions today. Dystopian thinking gives us no roadmap to get there other than extreme solutions.

Imagining a Better Future

The way forward here is neither to indulge in apocalyptic fantasies nor pollyannaish techno-optimism, but to approach these technologies with reasoned risk analysis, sensible industry best practices, educational efforts and other agile governance steps. In a forthcoming book on flexible governance strategies for AI and robotics, I outline how these and other strategies are already being formulated to address real-world challenges in fields as diverse as driverless cars, drones, machine learning in medicine and much more.

A wide variety of ethical frameworks, offered by professional associations, academic groups and others, already exists to “bake in” best practices and align AI design with widely shared goals and values while also “keeping humans in the loop” at critical stages of the design process to ensure that they can continue to guide and occasionally realign those values and best practices as needed.

When things do go wrong, many existing remedies are available, including a wide variety of common law solutions (torts, class actions, contract law, etc.), recall authority possessed by many regulatory agencies, and various consumer protection policies and other existing laws. Moreover, the most effective solution to technological problems usually lies in more innovation, not less. It is only through constant trial and error that humanity discovers better  and safer ways of satisfying important wants and needs.

These are complicated and nuanced issues that demand tailored and iterative governance responses. But this should not be done using inflexible, innovation-limiting mandates. Concerns about AI dangers deserve serious consideration and appropriate governance steps to ensure that these systems are beneficial to society. However, there is an equally compelling public interest in ensuring that AI innovations are developed and made widely available to help improve human well-being across many dimensions.

So, enjoy your next dopamine hit of sci-fi hysteria—I know I will, too. But don’t let that be your guide to the world that awaits us. Even if most sci-fi writers can’t imagine a better future, the rest of us can.

]]>
https://techliberation.com/2022/09/02/why-the-endless-techno-apocalyptica-in-modern-sci-fi/feed/ 0 77033
AI Governance “on the Ground” vs “on the Books” https://techliberation.com/2022/08/24/ai-governance-on-the-ground-vs-on-the-books/ https://techliberation.com/2022/08/24/ai-governance-on-the-ground-vs-on-the-books/#respond Wed, 24 Aug 2022 15:14:56 +0000 https://techliberation.com/?p=77028

[Cross-posted from Medium]

There are two general types of technological governance that can be used to address challenges associated with artificial intelligence (AI) and computational sciences more generally. We can think of these as “on the ground” (bottom-up, informal “soft law”) governance mechanisms versus “on the books” (top-down, formal “hard law”) governance mechanisms.

Unfortunately, heated debates about the latter type of governance often divert attention from the many ways in which the former can (or already does) help us address many of the challenges associated with emerging technologies like AI, machine learning, and robotics. It is important that we think harder about how to optimize these decentralized soft law governance mechanisms today, especially as traditional hard law methods are increasingly strained by the relentless pace of technological change and ongoing dysfunctionalism in the legislative and regulatory arenas.

On the Ground vs. On the Books Governance

Let’s unpack these “on the ground” and “on the books” notions a bit more. I am borrowing these descriptors from an important 2011 law review article by Kenneth A. Bamberger and Deirdre K. Mulligan, which explored the distinction between what they referred to as “Privacy on the Books and on the Ground.” They identified how privacy best practices were emerging in a decentralized fashion thanks to the activities of corporate privacy officers and privacy associations who helped formulate best practices for data collection and use.

The growth of privacy professional bodies and nonprofit organizations — especially the International Association of Privacy Professionals (IAPP) — helped better formalize privacy best practices by establishing and certifying internal champions to uphold key data-handling principles within organizations. By 2019, the IAPP had over 50,000 trained members globally, and its numbers keep swelling. Today, it is quite common to find Chief Privacy Officers throughout the corporate, governmental, and non-profit world.

These privacy professionals work together and in conjunction with a wide diversity of other players to “bake in” widely accepted information collection and use practices within all these organizations. With the help of IAPP and other privacy advocates and academics, these professionals also look to constantly refine and improve their standards to account for changing circumstances and challenges in our fast-paced data economy. They also look to ensure that organizations live up to commitments they have made to the public or even governments to abide by various data-handling best practices.

Soft Law vs. Hard Law

These “on the ground” efforts have helped usher in a variety of corporate social responsibility best practices and provide a flexible governance model that can be a complement to, or sometimes even a substitute for, formal “on the books” efforts. We can also think of this as the difference between soft law and hard law.

Soft law refers to agile, adaptable governance schemes for emerging technology that create substantive expectations and best practices for innovators without regulatory mandates. Soft law can take many forms, including guidelines, best practices, agency consultations & workshops, multistakeholder initiatives, and other experimental types of decentralized, non-binding commitments and efforts.

Soft law has become a bit of a gap-filler in the U.S. as hard law efforts fail for various reasons. The most obvious explanation for why the role of hard law governance has shrunk is that it’s just very hard for law to keep up with fast-moving technological developments today. This is known as the pacing problem. Many scholars have identified how the pacing problem gives rise to a “governance gap” or “competency trap” for policymakers because, just as quickly as they are coming to grips with new technological developments, other technologies are emerging quickly on their heels.

Think of modern technologies — especially informational and computational technologies — like a series of waves that come flowing in to shore faster and faster. As soon as one wave crests and then crashes down, another one comes right after it and soaks you again before you’ve had time to recover from the daze of the previous ones hitting you. In a world of combinatorial innovation, in which technologies build on top of one another in a symbiotic fashion, this process becomes self-reinforcing and relentless. For policymakers, this means that just when they’ve worked their way up one technological learning curve, the next wave hits and forces them to try to quickly learn about and prepare for the next one that has arrived. Lawmakers are often overwhelmed by this flood of technological change, making it harder and harder for policies to get put in place in a timely fashion — and equally hard to ensure that any new or even existing policies stay relevant as all this rapid-fire innovation continues.

Legislative dysfunctionalism doesn’t help. Congress has a hard time advancing bills on many issues, and technical matters often get pushed to the bottom of the priorities list. The end result is that Congress has increasingly become a non-actor on tech policy in the U.S. Most of the action lies elsewhere.

What’s Your Backup Plan?

This means there is a powerful pragmatic case for embracing soft law efforts that can at least provide us with some “on the ground” governance efforts and practices. Increasingly, soft law is filling the governance gap because hard law is failing for a variety of reasons already identified. Practically speaking, even if you are dead set on imposing a rigid, top-down, technocratic regulatory regime on any given sector or technology, you should at least have a backup plan in mind if you can’t accomplish that.

This is why privacy governance in the United States continues to depend heavily on such soft law efforts to fill the governance vacuum after years of failed attempts to enact a formal federal privacy law. While many academics and others continue to push for such an over-arching data handling law, bottom-up soft law efforts have played an important role in balancing privacy and innovation.

In a similar way, “on the ground” governance efforts are already flourishing for artificial intelligence and machine learning as policymakers continue to very slowly consider whether new hard law initiatives are wise or even possible. For example, congressional lawmakers have been considering a federal regulatory framework for driverless cars for the past several sessions of Congress. Many people in Congress and in academic circles agree that a federal framework is needed, if for no other reason than to preempt the much-dreaded specter of a patchwork of inconsistent state and local regulatory policies. With so much bipartisan agreement out there on driverless car legislation, it would seem like a federal bill would be a slam dunk. For that reason, year in and year out, people always predict: this is the year we’ll get driverless car legislation! And yet, it never happens due to a combination of special interest opposition from unions and trial lawyers, in addition to the pacing problem issue and Congress focusing its limited attention on other issues.

This is also already true for algorithmic regulation. We hear lots of calls to do something, but it remains unclear what that something is or whether it will get done any time soon. If we could not get a privacy bill through Congress after at least a dozen years of major efforts, chances are that broad-based AI regulation is going to be equally challenging.

Soft Law for AI is Exploding

Thus, soft law will likely fill the governance gap for AI. It already is. I’m working on a new book that documents the astonishing array of soft law mechanisms already in place or being developed to address various algorithmic concerns. I can’t seem to finish the book because there is just so much going on related to soft law governance for algorithmic systems. As Mark Coeckelbergh noted in his recent book on AI ethics, there’s been an “avalanche of initiatives and policy documents” around AI ethics and best practices in recent years. It is a bit overwhelming, but the good news is that there is a lot of consistency in these governance efforts.

To illustrate, a 2019 survey by a group of researchers based in Switzerland analyzed 84 AI ethical frameworks and found “a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy).” A more recent 2021 meta-survey by a team of Arizona State University (ASU) legal scholars reviewed an astonishing 634 soft law AI programs formulated between 2016 and 2019. Some 36 percent of these efforts were initiated by governments, with the others led by non-profits or private sector bodies. Echoing the findings of the Swiss researchers, the ASU report found widespread consensus among these soft law frameworks on values such as transparency and explainability, ethics/rights, security, and bias. This makes it clear that there is considerable consistency among ethical soft law frameworks: most of them focus on a core set of values to embed within AI design. The UK-based Alan Turing Institute boils its list down to four “FAST Track Principles”: Fairness, Accountability, Sustainability, and Transparency.

The ASU scholars noted how ethical best practices for product design already influence developers today by creating powerful norms and expectations about responsible product design. “Once a soft law program is created, organizations may seek to enforce it by altering how their employees or representatives perform their duties through the creation and implementation of internal procedures,” they note. “Publicly committing to a course of action is a signal to society that generates expectations about an organization’s future actions.”

This is important because many major trade associations and individual companies have been formulating governance frameworks and ethical guidelines for AI development and use. For example, among large trade associations, the U.S. Chamber of Commerce, the Business Roundtable, BSA | The Software Alliance, and ACT | The App Association have all recently released major AI best practice guidelines. Notable corporate efforts to adopt guidelines for ethical AI practices include statements or frameworks by IBM, Intel, Google, Microsoft, Salesforce, SAP, and Sony, to name just a few. These companies are also creating internal champions to push AI ethics through the appointment of Chief Ethics Officers, the creation of official departments, or both, plus additional staff to guide the process of baking in AI ethics by design.

Once again, there is remarkable consistency among these corporate statements in terms of the best practices and ethical guidelines they endorse. Each trade association or corporate set of guidelines aligns closely with the core values identified in the hundreds of other soft law frameworks that the ASU scholars surveyed. These efforts go a long way toward helping to promote a culture of responsibility among leading AI innovators. We can think of this as the professionalization of AI best practices.

What Soft Law Critics Forget

Some will claim that “on the ground” soft law efforts are not enough, but they typically make two mistakes when saying so.

Their first mistake is thinking that hard law is practical or even optimal for fast-paced, highly mercurial AI and ML technologies. It’s not just that the pacing problem necessitates new thinking about governance. Critics fail to understand how hard law would likely significantly undermine algorithmic innovation because algorithmic systems can change by the minute and require a more agile and adaptive system of governance by their very nature.

This is a major focus of my book; I previously published a draft chapter from it on “The Proper Governance Default for AI,” and another essay on “Why the Future of AI Will Not Be Invented in Europe.” These essays explain why a Precautionary Principle-oriented regulatory regime for algorithmic systems would stifle technological development, undermine entrepreneurialism, diminish competition and global competitive advantage, and even have a deleterious impact on our national security goals.

Traditional regulatory systems can be overly rigid, bureaucratic, inflexible, and slow to adapt to new realities. They focus on preemptive remedies that aim to predict the future, and future hypothetical problems that may never come about. Worse yet, administrative regulation generally preempts or prohibits the beneficial experiments that yield new and better ways of doing things. When innovators must seek special permission before they offer a new product or service, it raises the cost of starting a new venture and discourages activities that benefit society. We need to avoid that approach if we hope to maximize the potential of AI-based technologies.

The second mistake that soft law critics make is that they fail to understand how many hard law mechanisms actually play a role in supporting soft law governance. AI applications already are regulated by a whole host of existing legal policies. If someone does something stupid or dangerous with AI systems, the Federal Trade Commission (FTC) has the power to address “unfair and deceptive practices” of any sort. And state Attorneys General and state consumer protection agencies also routinely address unfair practices and continue to advance their own privacy and data security policies, some of which are often more stringent than federal law.

Meanwhile, several existing regulatory agencies in the U.S. possess investigatory and recall authority that allows them to remove products from the market when certain unforeseen problems manifest themselves. For example, the National Highway Traffic Safety Administration (NHTSA), the Food & Drug Administration (FDA), and Consumer Product Safety Commission (CPSC) all possess broad recall authority that could be used to address risks that develop for many algorithmic or robotic systems. For example, NHTSA is currently using its investigative authority to evaluate Tesla’s claims about “full self-driving” technology and the agency has the power to take action against the company under existing regulations. Likewise, the FDA used its broad authority to crack down on genetic testing company 23andme many years ago. And CPSC and the FTC have broad authority to investigate claims made by innovators, and they’ve already used it. It’s not like our expansive regulatory state lacks considerable existing power to police new technology. If anything, the power of the administrative state is too broad and amorphous and it can be abused in certain instances.

Perhaps most importantly, our common law system can address other deficiencies with AI-based systems and applications using product defects law, torts, contract law, property law, and class action lawsuits. This is a better way of addressing risks compared to preemptive regulation of general-purpose AI technology because it at least allows the technologies to first develop and then see what actual problems manifest themselves. Better to treat innovators as innocent until proven guilty than the other way around.

There are other thorny issues that deserve serious policy consideration and perhaps even some new rules. But how risks are addressed matters deeply. Before we resort to heavy-handed, legalistic solutions for possible problems, we should exhaust all other potential remedies first.

In other words, “on the ground” soft law governance mechanisms and ex post legal solutions should generally trump ex ante (preemptive, precautionary) regulatory constraints. But we should look for ways to refine and improve soft law governance tools, perhaps through better voluntary certification and auditing regimes to hold developers to a high standard on the important AI ethical practices we want them to uphold. This is the path forward to achieve responsible AI innovation without the heavy-handed baggage associated with more formalistic, inflexible regulatory approaches that are ill-suited for complicated, rapidly-evolving computational technologies.

___________________

Related Reading on AI & Robotics

Why the Future of AI Will Not Be Invented in Europe https://techliberation.com/2022/08/01/why-the-future-of-ai-will-not-be-invented-in-europe/ https://techliberation.com/2022/08/01/why-the-future-of-ai-will-not-be-invented-in-europe/#comments Mon, 01 Aug 2022 18:28:40 +0000 https://techliberation.com/?p=77016

For my latest column in The Hill, I explored the European Union’s (EU) endlessly expanding push to regulate all facets of the modern data economy. That now includes a new effort to regulate artificial intelligence (AI) using the same sort of top-down, heavy-handed, bureaucratic compliance regime that has stifled digital innovation on the continent over the past quarter century.

The European Commission (EC) is advancing a new Artificial Intelligence Act, which proposes banning some AI technologies while classifying many others under a heavily controlled “high-risk” category. A new bureaucracy, the European Artificial Intelligence Board, will be tasked with enforcing a wide variety of new rules, including “prior conformity assessments,” which are like permission slips for algorithmic innovators. Steep fines are also part of the plan. There’s a lengthy list of covered sectors and technologies, with many others that could be added in coming years. It’s no wonder, then, that the measure has been labelled “the mother of all AI laws” and that analysts have argued it will further burden innovation and investment in Europe.

As I noted in my new column, the consensus about Europe’s future on the emerging technology front is dismal to put it mildly. The International Economy journal recently asked 11 experts from Europe and the U.S. where the EU currently stood in global tech competition. Responses were nearly unanimous and bluntly summarized by the symposium’s title: “The Biggest Loser.” Respondents said Europe is “lagging behind in the global tech race,” and “unlikely to become a global hub of innovation.” “The future will not be invented in Europe,” another analyst bluntly concluded.

That’s a grim assessment, but there is no doubt that European competitiveness is suffering today and that excessive regulation plays a fairly significant role in causing it. As I noted in my column, “the EU’s risk-averse culture and preference for paperwork compliance over entrepreneurial freedom” has had serious consequences for continent-wide innovation. I note in my recent column how:

After the continent piled on layers of data restrictions beginning in the mid-1990s, innovation and investment suffered. Regulation grew more complex with the 2018 General Data Protection Regulation (GDPR), which further limits data collection and use. As a result of all the red tape, the EU came away from the digital revolution with “the complete absence of superstar companies.” There are no serious European versions of Microsoft, Google, Facebook, Apple or Amazon. Europe’s leading providers of digital technology services today are American-based companies.

Let’s take a look at a few numbers that illustrate what’s happened in Europe’s tech sector over the past quarter century. Here’s an old KPMG breakdown of market caps for public Internet companies over an important 20-year period, from 1995 to 2015, when the digital technology marketplace was taking shape. Besides the remarkable amount of churn over that period (with only Apple appearing on both lists), the other notable thing is the complete absence of any European companies in 2015.

Next, here’s a chart I constructed using CB Insights data for global unicorns (companies valued at $1 billion or more) from 2010 up through early 2022. It shows how the U.S. dominates fully half the list, with China holding a 16 percent share, while all of the European Union’s firms together equal just a 9 percent slice of the world’s share.

If you want to see a per capita breakdown of VC investment by country, here’s a handy Crunchbase News chart. While the U.S. is geographically much larger than Europe, a breakdown of VC funding on a per capita basis reveals that only Estonia ($915 per capita) and Sweden ($700) have startup investment on par with America ($808). No other European country has even half as much per capita VC investment as the U.S., and most don’t even have a quarter as much.

As we enter the “age of AI,” what will the same EU regulatory model mean for AI, machine learning, and robotics in Europe? We do have some early data on that, too. Here’s a breakdown of AI-related VC activity and AI unicorns in 2021 from the recent State of AI Report 2021, with European countries already trailing far behind:

Also, here’s some data on recent AI investment by region from the latest Stanford “AI Index Report 2022” which again highlights a gap that is only growing larger:

It’s important to listen to what actual AI innovators across the Atlantic have to say about the new EU regulatory efforts. Just last month, the UK-based Coalition for a Digital Economy (Coadec), an advocacy group for Britain’s technology-led startups, published a report entitled “What do AI Startups Want from Regulation?” Coadec surveyed its members to gauge their feelings about the EU’s proposed approach to AI regulation, as well as the UK’s. Of those startups, 76% said their business model would be either negatively affected or rendered infeasible if the UK were to echo the EU by making AI developers liable, and an equal percentage expressed concerns about whether it is even technically feasible to make their datasets “free of errors,” as the EU looks set to demand. Respondents also feared that the new AI Act would be particularly burdensome to small and mid-size entrepreneurs, who, unlike their larger competitors, cannot afford costly compliance hassles. This would end up being a replay of the burdens they faced from GDPR, which decimated small businesses. “The experience of GDPR demonstrated how unclear, complex and expensive regulations drove many startups out of business, and disproportionately impact startups that survived–GDPR compliance cost startups significantly more than it did the Tech Giants,” the Coadec report concluded.

At least those UK-based innovators might be in a slightly better position post-Brexit, with the British government now looking to chart a different–and much less burdensome–governance approach for digital technologies. In fact, the UK government recently released a major policy document on “Establishing a Pro-Innovation Approach to Regulating AI,” which makes a concerted effort to distinguish its approach from the EU’s. “We will ask that regulators focus on high risk concerns rather than hypothetical or low risks associated with AI,” the report noted. “We want to encourage innovation and avoid placing unnecessary barriers in its way.” This is consistent with what the UK government has been saying on technology governance more generally. For example, in a recent report advocating for innovation-friendly regulation, the UK government’s Regulatory Horizons Council argued that, when it comes to the regulation of emerging technologies like AI, “it is also necessary to consider the risk that the intervention itself poses.” “This would include the potential impact on benefits from a particular innovation that might be foregone; it would also include the potential creation of a ‘chilling effect’ on innovation more generally,” the Council concluded. Clearly, this approach to technology policy stands in stark contrast to the EU’s heavy-handed model. So, there is a chance that at least some innovators based in the UK can escape the EU’s regulatory hell.

What about AI innovators stuck on the European continent? What are they saying about the regulations they will soon face? The European DIGITAL SME Alliance, the largest network of small and medium sized enterprises (SMEs) in the European ICT sector, represents roughly 45,000 digital SMEs. In comments to the EC about the impact of the law, the Alliance highlighted how costly the AI Act’s conformity assessments and other regulations will be for smaller innovators. “This may put a burden on AI innovation,” the Alliance argued, because of the limited financial and human resources of SMEs. “[A] regulation that requires SMEs to make these significant investments, will likely push SMEs out of the market,” the group noted. “This is exactly the opposite of the intention to support a thriving and innovative AI ecosystem in Europe.” Moreover, “SMEs will not be able to pass on these costs to their customers in the final customer end pricing,” the Alliance correctly noted, because “[t]he market is global and highly competitive. Therefore, customers will choose cheaper solutions and Europe risks to be left behind in technology development and global competition.”

In March, the Alliance also hosted a forum on “The European AI Act and Digital SMEs,” which featured comments from some operators in this space. Some speakers were quite timid; you could sense they feared pushing back too aggressively against the European Commission, lest they get on the bad side of regulators before the rules go into effect. But Mislav Malenica, founder and CEO of Mindsmiths, didn’t pull any punches in his remarks. His company is trying to build autonomous support systems in many different fields, but its ability to innovate and compete globally will be severely curtailed by the EU AI Act, he argued.

I usually don’t spend time transcribing people’s comments from events, but I went back and watched Malenica’s multiple times because his remarks are so powerful and I wanted to make sure others hear what he was saying. [Malenica’s opening comments during the event run from 42:29 to 49:34 of the video, and he has more to say during the Q&A beginning at 1:27:28.] Here’s a quick summary of a few of Malenica’s key points (listed chronologically):

  • “I’m not sure we are doing everything we can do actually to create an environment that’s innovation friendly.”
  • “we see a lot of uncertainty. We see fear.”
  • “basically we won’t be able to get funding here.”
  • while reading through the AI Act, he notes, “I don’t see start-ups being mentioned anywhere, and startups are the main vehicles of innovation.” […] “I find it very arrogant”
  • if the AI Act becomes law, “what we’ll do in Europe is we’ll create a new market and that’s the AI markets based on fear,” one focused on just building products that avoid the wrath of government or lawsuits.
  • “we are really stifling innovation” and that means Europeans will have to import autonomous products from foreign companies instead of making them there.

Later, during the Q&A period, Malenica notes how his first virtual currency startup had to use half its investment capital just dealing with regulatory compliance issues, and most venture capitalists wouldn’t get behind launching in Europe because of such legal hassles. He reflects on what this means for other innovators going forward as the EU prepares to expand its regulatory regime for AI sectors:

  • “I don’t think we’re missing talent. That’s just a consequence” of all the regulation. “We are missing a sense that you have opportunities here. If you [have] the opportunities here, then the talent will come, the funding will come, and so on because people see that they’ll be able to make money, they’ll be able to build companies, and so on.”
  • “If we now take a look at the 10 biggest companies market capitalizations in the world, we’ll see that none of them comes actually from Europe” with U.S. tech companies dominating the list. “So, we missed that wave completely.” Why? “Because we didn’t inspire anyone to take action,” and that is about to happen for AI.
  • “We need to decide if we are going to be a land of opportunities, or will we be just consumers of other people’s tech, the same we are right now” for digital software and services.
  • “We’re already finding excuses for the loss” of the AI market, he argues.

Malenica’s comments are extraordinarily demoralizing if you care about innovation. Now, I’m an American, and one way to look at this dismal situation is that, by hobbling its own startups and existing AI innovators, Europe is doing the U.S. another favor by essentially taking itself out of the running in the next great global tech race. Europe’s actions may also mean that America gains many of its best and brightest if they come to the U.S. to create the next great algorithmic service or application because they can’t do so in the EU. This is exactly what happened over the past few decades for Internet startups, Malenica noted.

But that’s dismal news in another sense. Europe is filled with brilliant innovators, highly-skilled talent, world-class educational institutions, and even many venture capitalists looking to invest in this arena. Unfortunately, the continent’s suffocating regulatory approach makes it nearly impossible for digital technology innovators to have a fighting chance. Through their heavy-handed policies, European officials have essentially declared their innovators “guilty until proven innocent.” And that means that Europeans and the rest of the world are being deprived of many important life-enriching and life-saving AI applications that those innovators could create. Technological innovation is not a zero-sum game that only one country can “win.” Innovation drives growth and prosperity and lifts all boats as its benefits spread throughout the world. When European innovators prosper, people all over the world prosper along with them.

Is there any chance the European Commission softens its stance toward emerging technologies and looks to adopt a more flexible governance approach that instead treats AI innovators as innocent until proven guilty? I think it is extremely unlikely that will happen because, as Malenica noted, European technology policy is too rooted in fear of disruption and extreme risk-aversion. EU officials are forgetting that the most important lesson from the history of technological innovation is there can be no progress without some risk-taking and corresponding disruption. My favorite quote about the relationship between risk-taking and human progress comes from Wilbur Wright who, along with his brother, helped pioneer human flight. “If you are looking for perfect safety,” Wright said, “you would do well to sit on a fence and watch the birds.” European policymakers are essentially forcing their best and brightest innovators to sit on the fence and watch the rest of the world fly right past them on the digital technology and AI front. The ramifications for the continent will be disastrous. Regardless, as I noted in concluding my recent Hill column, Europe’s approach to AI “shouldn’t be the model the U.S. follows if it hopes to maintain its early lead in AI and robotics. America should instead welcome European companies, workers and investors looking for a more hospitable place to launch bold new AI innovations.”

Alas, European officials appear ready to ignore the deleterious impact of their policies on innovation and competition and instead make regulation their leading export to the world. In fact, the European Commission will soon open a San Francisco office to work more closely with Silicon Valley companies affected by EU tech regulation. European leaders have basically surrendered on the idea of home-grown innovation and are now plowing all their energies into regulating the rest of the world’s largest digital technology companies, most of which are headquartered in the United States. It’s no wonder, then, that The Economist magazine concludes that, “Europe is the free-rider continent” that “has piggybacked on innovation from elsewhere, keeping up with rivals, not forging ahead.” Instead, “the cuddly form of capitalism embraced in Europe has markedly failed to create world-beating companies,” the magazine argues.

European officials want us to believe that they are somehow doing the world a favor by being its global tech regulator, when instead they are simply solidifying the power of the largest digital tech companies, who are the only ones with enough resources–mainly in the form of massive legal compliance teams–to live under the EU’s innovation-crushing regulations. Sadly, many US policymakers hate our own home-grown tech companies so much now that they are willing to let this happen. In a better world, those American lawmakers would stand up to European officials looking to bully tech innovators, and we would reject the innovation-killing recipe that the EU is cooking up for AI markets and expects the rest of the world to eat.


Additional Reading on AI & Robotics:

Again, We Should Not Ban All Teens from Social Media https://techliberation.com/2022/07/05/again-we-should-not-ban-all-teens-from-social-media/ https://techliberation.com/2022/07/05/again-we-should-not-ban-all-teens-from-social-media/#respond Wed, 06 Jul 2022 00:16:49 +0000 https://techliberation.com/?p=77004

A growing number of conservatives are calling for Big Government censorship of social media speech platforms. Censorship proposals are to conservatives what price controls are to radical leftists: completely outlandish, unworkable, and usually unconstitutional fantasies of controlling things that are ultimately much harder to control than they realize. And the costs of even trying to impose and enforce such extremist controls are always enormous.

Earlier this year, The Wall Street Journal ran a response I wrote to a proposal set forth by columnist Peggy Noonan in which she proposed banning everyone under 18 from all social-media sites (“We Can Protect Children and Keep the Internet Free,” Apr. 15). I expanded upon that letter in an essay here entitled, “Should All Kids Under 18 Be Banned from Social Media?” National Review also recently published an article penned by Christine Rosen in which she also proposes to “Ban Kids from Social Media.” And just this week, Zach Whiting of the Texas Public Policy Foundation published an essay on “Why Texas Should Ban Social Media for Minors.”

I’ll offer a few more thoughts here in addition to what I’ve already said elsewhere. First, here is my response to the Rosen essay. National Review gave me 250 words to respond to her proposal:

While admitting that “law is a blunt instrument for solving complicated social problems,” Christine Rosen (“Keep Them Offline,” June 27) nonetheless downplays the radicalness of her proposal to make all teenagers criminals for accessing the primary media platforms of their generation. She wants us to believe that allowing teens to use social media is the equivalent of letting them operate a vehicle, smoke tobacco, or drink alcohol. This is false equivalence. Being on a social-media site is not the same as operating two tons of steel and glass at speed or using mind-altering substances. Teens certainly face challenges and risks in any new media environment, but to believe that complex social pathologies did not exist before the Internet is folly. Echoing the same “lost generation” claims made by past critics who panicked over comic books and video games, Rosen asks, “Can we afford to lose another generation of children?” and suggests that only sweeping nanny-state controls can save the day. This cycle is apparently endless: Those “lost generations” grow up fine, only to claim it’s the next generation that is doomed! Rosen casually dismisses free-speech concerns associated with mass-media criminalization, saying that her plan “would not require censorship.” Nothing could be further from the truth. Rosen’s prohibitionist proposal would deny teens the many routine and mostly beneficial interactions they have with their peers online every day. While she belittles media literacy and other educational and empowerment-based solutions to online problems, those approaches continue to be a better response than the repressive regulatory regime she would have Big Government impose on society.

I have a few more things to say beyond these brief comments.

First, as I alluded to in my short response to Rosen, we’ve heard similar “lost generation” stories before. Rosen might as well be channeling the ghost of Dr. Fredric Wertham (author of Seduction of the Innocent), who in the 1950s declared comic books a public health menace and lobbied lawmakers to restrict teen access to them, insisting such comics were “the cause of a psychological mutilation of children.” The same sort of “lost generation” predictions were commonplace in countless anti-video game screeds of the 1990s. Critics were writing books with titles like Stop Teaching Our Kids to Kill and referring to video games as “murder simulators.” Ironically, just as the video game panic was heating up, juvenile crime rates were plummeting. But that didn’t stop the pundits and policymakers from suggesting that an entire generation of so-called “vidiots” was headed for disaster. (See my 2019 short history: “Confessions of a ‘Vidiot’: 50 Years of Video Games & Moral Panics.”)

It is consistently astonishing to me how, as I noted in a 2012 essay, “We Always Sell the Next Generation Short.” There seems to be a never-ending cycle of generational mistrust. “There has probably never been a generation since the Paleolithic that did not deplore the fecklessness of the next and worship a golden memory of the past,” notes Matt Ridley, author of The Rational Optimist.

For example, in 1948, the poet T. S. Eliot declared: “We can assert with some confidence that our own period is one of decline; that the standards of culture are lower than they were fifty years ago; and that the evidences of this decline are visible in every department of human activity.” We’ve heard parents (and policymakers) make similar claims about every generation since then.

What’s going on here? Why does this cycle of generational pessimism and mistrust persist? In a 1992 journal article, the late journalism professor Margaret A. Blanchard offered this explanation:

“[P]arents and grandparents who lead the efforts to cleanse today’s society seem to forget that they survived alleged attacks on their morals by different media when they were children. Each generation’s adults either lose faith in the ability of their young people to do the same or they become convinced that the dangers facing the new generation are much more substantial than the ones they faced as children.”

In a 2009 book on culture, my colleague Tyler Cowen also noted how, “Parents, who are entrusted with human lives of their own making, bring their dearest feelings, years of time, and many thousands of dollars to their childrearing efforts.” Unsurprisingly, therefore, “they will react with extreme vigor against forces that counteract such an important part of their life program.” This explains why “the very same individuals tend to adopt cultural optimism when they are young, and cultural pessimism once they have children,” Cowen says.

Building on Blanchard and Cowen’s observation, I have explained how the most simple explanation for this phenomenon is that many parents and cultural critics have passed through their “adventure window.” The willingness of humans to try new things and experiment with new forms of culture—our “adventure window”—fades rapidly after certain key points in life, as we gradually settle in our ways. As the English satirist Douglas Adams once humorously noted: “Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you’re thirty-five is against the natural order of things.”

There is no doubt social media can create or exacerbate certain social pathologies among youth. But pro-censorship conservatives want to take the easy way out with a Big Government media ban for the ages.

Ultimately, it’s a solution that will not be effective. Raising children and mentoring youth is the hardest task we face as adults because simple solutions rarely exist for complex human challenges. The issues kids face are often particularly hard for parents and other adults to grapple with because we fail to fully understand both the unique challenges each generation faces and the nature of each new medium that youth embrace. Simplistic solutions–even proposals for outright bans–will not work or solve serious problems.

An outright government ban on online platforms or digital devices is likely never going to happen due to First Amendment constraints, but even ignoring the jurisprudential barriers, bans won’t work for a reason that these conservatives never bother considering: Many parents will help their kids get access to those technologies and evade restrictions on their use. Countless parents already do so in violation of COPPA rules, and not just because they worry that their kid won’t have access to what other kids have. Rather, many parents (like me) want both to make sure they can more easily communicate with their children and to ensure that their children can enjoy those technologies and use them to explore the world.

These conservatives might think a parent like me is a monster for allowing my (now grown) children to get on social media when they were teens. I wasn’t blind to the challenges, but I recognized that sticking one’s head in the sand or hoping for divine intervention from the Nanny State was impractical and unwise. The hardest conversations I ever had with my kids were about the ugliness they sometimes experienced online, but those conversations were also countered by the many joys that I knew online interactions brought them. Shall I tell you about everything my son learned online before 13 about building model rockets or soapbox derby cars? Or the countless sites my daughter visited gathering ideas for her arts and crafts projects when, before the age of 13, she started hand-painting and selling jean jackets (eventually prompting her to pursue an art school degree)? Again, as I noted in my National Review response, Rosen’s prohibitionist proposal would deny teens these experiences and the countless other routine and entirely beneficial interactions that they have with their peers online every day.

There is simply no substitute for talking to your kids in the most open, understanding, and loving fashion possible. My #1 priority with my own children was not foreclosing access to all the new digital media platforms and devices at their disposal–that was going to be almost impossible anyway. Other approaches are needed.

Yes, of course, the world can be an ugly place. I mean, have you ever watched the nightly news on television? It’s damn ugly. Shouldn’t we block youth access to it when scenes of war and violence are shown? Newspapers are full of ugliness, too. Should a kid be allowed to see the front page of the paper when it discusses or shows the aftermath of school shootings, acts of terrorism, or even just natural disasters? I could go on, but you get the point. And you could try to claim that somehow today’s social media environment is significantly worse for kids than the mass media of old, but you cannot prove it.

Of course you’ll have anecdotes, and many of them will again point to complex social pathologies. But I have entire shelves full of books on my office wall that made similar claims about the effects of books, the telephone, radio and television, comics, cable TV, every musical medium ever, video games, and advertising efforts across all these mediums. Hundreds upon hundreds of studies were done over the past half century about the effects of depictions of violence in movies, television, and video games. And endless court battles ensued.

In the end, nothing came out of it because the literature was inconclusive and frequently contradictory. After many years of panicking about youth and media violence, in 2020, the American Psychological Association issued a new statement slowly reversing course on misguided past statements about video games and acts of real-world violence. The APA’s old statement said that evidence “confirms [the] link between playing violent video games and aggression.”  But the APA has come around and now says that, “there is insufficient scientific evidence to support a causal link between violent video games and violent behavior.” More specifically, the APA now says: “Violence is a complex social problem that likely stems from many factors that warrant attention from researchers, policy makers and the public. Attributing violence to violent video gaming is not scientifically sound and draws attention away from other factors.”

This is exactly what we should expect to find true for youth and social media. Most serious scholars in the field already note that studies and findings about youth and social media must be carefully evaluated and that many other factors need to be considered when assessing claims about complex social phenomena.

While Rosen belittles media literacy and other educational and empowerment-based solutions to online problems, those approaches continue to represent the best first-order response when compared to the repressive regulatory regime she would impose on society.

Finally, I want to reiterate what I said in my brief National Review response about the enormous challenges associated with mass criminalization of speech platforms. Rosen seems to imagine that all the costs and controversies will lie on the supply side of social media: just call for a ban, and then magically all kids disappear from social media while the big evil tech capitalists eat all the costs and hassles. Nonsense. It’s the demand side of criminalization efforts where the most serious costs lie. What do you really think kids are going to do if Uncle Sam suddenly bans everyone under 18 from going on a “social media site,” whatever that very broad term entails? This will become another sad chapter in the history of Big Government prohibitionist efforts that fail miserably, but not before declaring mass groups of people criminals–this time including everyone under 18–and then trying to throw the book at them when they seek to evade those repressive controls. There are better ways to address these problems than with such extremist proposals.


Additional Reading from Adam Thierer on Media & Content Regulation:

https://techliberation.com/2022/07/05/again-we-should-not-ban-all-teens-from-social-media/feed/ 0 77004
Podcast: Remember FAANG? https://techliberation.com/2022/05/10/podcast-remember-faang/ https://techliberation.com/2022/05/10/podcast-remember-faang/#respond Tue, 10 May 2022 15:47:16 +0000 https://techliberation.com/?p=76986

Corbin Barthold invited me on Tech Freedom’s “Tech Policy Podcast” to discuss the history of antitrust and competition policy over the past half century. We covered a huge range of cases and controversies, including: the DOJ’s mega cases against IBM & AT&T, Blockbuster and Hollywood Video’s derailed merger, the Sirius-XM deal, the hysteria over the AOL-Time Warner merger, the evolution of competition in mobile markets, and how we finally ended that dreaded old MySpace monopoly!

What does the future hold for Google, Facebook, Amazon, and Netflix? Do antitrust regulators at the DOJ or FTC have enough to mount a case against these firms? Which case is most likely to have legs?

Corbin and I also talked about the idea of progress more generally and the troubling rise of Luddite thinking on both the left and right. I encourage you to give it a listen:

The Classical Liberal Approach to Digital Media Free Speech Issues https://techliberation.com/2021/12/08/the-classical-liberal-approach-to-digital-media-free-speech-issues/ https://techliberation.com/2021/12/08/the-classical-liberal-approach-to-digital-media-free-speech-issues/#respond Wed, 08 Dec 2021 20:41:45 +0000 https://techliberation.com/?p=76930

On December 13th, I will be participating in an Atlas Network panel on, “Big Tech, Free Speech, and Censorship: The Classical Liberal Approach.” In anticipation of that event, I have also just published a new op-ed for The Hill entitled, “Left and right take aim at Big Tech — and the First Amendment.” In this essay, I expand upon that op-ed and discuss the growing calls from both the Left and the Right for a variety of new content regulations. I then outline the classical liberal approach to concerns about free speech platforms more generally, which ultimately comes down to the proposition that innovation and competition are always superior to government regulation when it comes to content policy.

In the current debates, I am particularly concerned with calls by many conservatives for more comprehensive governmental controls on speech policies enforced by various private platforms, so I will zero in on those efforts in this essay. First, here’s what both the Left and the Right share in common in these debates: Many on both sides of the aisle desire more government control over the editorial decisions made by private platforms. They both advocate more political meddling with the way private firms make decisions about what types of content and communications are allowed on their platforms. “In today’s hyper-partisan world,” I argue in my Hill column, “tech platforms have become just another plaything to be dominated by politics and regulation. When the ends justify the means, principles that transcend the battles of the day — like property rights, free speech and editorial independence — become disposable. These are things we take for granted until they’ve been chipped away at and lost.”

Despite a shared objective for greater politicization of media markets, the Left and the Right part ways quickly when it comes to the underlying objectives of expanded government control. As I noted in my Hill op-ed:

there is considerable confusion in the complaints both parties make about “Big Tech.” Democrats want tech companies doing more to limit content they claim is hate speech, misinformation, or that incites violence. Republicans want online operators to do less, because many conservatives believe tech platforms already take down too much of their content.

This makes life very lonely for free speech defenders and classical liberals. Usually in the past, we could count on the Left to be with us in some free speech battles (such as putting an end to “indecency” regulations for broadcast radio and television), while the Right would be with us on others (such as opposition to the “Fairness Doctrine,” or similar mandates). Today, however, it is more common for classical liberals to be fighting with both sides about free speech issues.

My focus is primarily on the Right because, with the rise of Donald Trump and “national conservatism,” there seems to be a lot of soul-searching going on among conservatives about their stance toward private media platforms, and the editorial rights of digital platforms in particular.

In my new Hill essay and other articles (all of which are listed down below), I argue there is a principled classical liberal approach to these issues that was nicely outlined by President Ronald Reagan in his 1987 veto of Fairness Doctrine legislation, when he said:

History has shown that the dangers of an overly timid or biased press cannot be averted through bureaucratic regulation, but only through the freedom and competition that the First Amendment sought to guarantee.

Let’s break that line down. Reagan admits that media bias can be a real thing. Of course it is! Journalists, editors, and even the companies they work for all have specific views. They all favor or disfavor certain types of content. But, at least in the United States, the editorial decisions made by these private actors are protected by the First Amendment. Section 230 is really quite secondary to this debate, even though some Trumpian conservatives wrongly suggest that it’s the real problem here. In reality, national conservatives would need to find a way to work around well-established First Amendment protections if they wanted to impose new restrictions on the editorial rights of private parties.

But why would they want to do that? Returning to the Reagan veto statement, we should remember how he noted that, even if the First Amendment did not protect the editorial discretion of private media platforms, bureaucratic regulation was not the right answer to the problem of “bias.”  Competition and choice were the superior answer. This is the heart and soul of the classical liberal perspective: more innovation is always superior to more regulation.

For the past 30 years, conservatives and classical liberals were generally aligned on that point. But the ascendancy of Donald Trump created a rift in that alliance that now threatens to grow into a chasm as more and more Right-of-center people begin advocating for comprehensive control of media platforms.

The problems with that are numerous, beginning with the fact that none of the old rationales for media controls work (and most of them never did). Consider the old arguments justifying widespread regulation of private media:

  • “Scarcity” was the oldest justification for media regulation, but we live in the exact opposite world today, in which the most common complaint about media is the abundance of it!
  • Conversely, the supposed “pervasiveness” of some media (namely broadcasting) was used as a rationale for government censorship in the past. But that, too, no longer works because in today’s crowded media marketplace and Internet-enabled world, all forms of communications and entertainment are equally pervasive to some extent.
  • State ownership and licensing of spectrum was another rationale for control that no longer works. No digital media platforms need federal licenses to operate today. So, that hook is also gone. Moreover, the answer to the problem of government ownership of media is to stop letting the government own and control media assets, including spectrum.
  • “Fairness” is another old excuse for control, with some regulatory advocates suggesting that five unelected bureaucrats at the Federal Communications Commission (or some other agency) are well-suited to “balance” the airing of viewpoints on media platforms. Of course, America’s disastrous experience with the Fairness Doctrine proved just how wrong that thinking was. [I summarize all the evidence proving that here.]

That leaves a final, more amorphous rationale for media control: “gatekeeper” concerns and assertions that private media platforms can essentially become “state actors.” In the wake of Donald Trump’s “de-platforming” from Facebook and Twitter, many of his supporters began adopting this language in defense of more aggressive government control of private media platforms, including the possibility of declaring those platforms common carriers and demanding that some sort of amorphous “neutrality” mandates be imposed on them. But as Berin Szóka and Corbin Barthold of Tech Freedom note:

Where courts have upheld imposing common carriage burdens on communications networks under the First Amendment, it has been because consumers reasonably expected them to operate conduits. Not so for social media platforms. [. . . ] When it comes to the regulation of speech on social media, however, the presumption of content neutrality does not apply. Conservatives present their criticism of content moderation as a desire for “neutrality,” but forcing platforms to carry certain content and viewpoints that they would prefer not to carry constitutes a “content preference” that would trigger strict scrutiny. Under strict scrutiny, any “gatekeeper” power exercised by social media would be just as irrelevant as the monopoly power of local newspapers was in [previous Supreme Court holdings].

Put simply, efforts to stretch extremely narrow and limited common carriage precedents to fit social media just don’t work. We’ve already seen lower courts declare that recently when blocking the enforcement of new conservative-led efforts in Florida and Texas to limit the editorial discretion of private social media platforms. If conservatives really hope to get around these legal barriers to regulation, what would be needed is a far-reaching strike at the First Amendment itself. That would entail a jurisprudential revolution at the Supreme Court — reversing about a century of free speech precedents — or some sort of effort to amend the First Amendment. These things are almost certainly not going to occur.

But, again, this hasn’t stopped some conservatives from pitching extreme solutions in their efforts to regulate digital media at both the state and federal level. I discuss these efforts in previous essays on, “How Conservatives Came to Favor the Fairness Doctrine & Net Neutrality,“ “Sen. Hawley’s Radical, Paternalistic Plan to Remake the Internet,“ and “The White House Social Media Summit and the Return of ‘Regulation by Raised Eyebrow’.“ Perhaps some Trump-aligned conservatives understand that these legislative efforts are unlikely to work, but they continue to push them in an attempt to make life hell for tech platforms, or perhaps just to troll the Left and “own the Libs.”

On the other hand, some conservatives seem to really believe in some of the extreme ideas they are tossing around. What is particularly troubling about these efforts is the way — following Trump’s lead — some conservatives, including even more mainstream conservative groups like the Heritage Foundation, are increasingly referring to private media platforms as “the enemy of the people.” That’s the kind of extremist language typically used by totalitarian thugs and Marxist lunatics who so hate private enterprise and freedom of speech that they are willing to adopt a sort of burn-the-village-to-save-it rhetorical approach to media policy.

And speaking of Marxists, here’s what is even more incredible about these efforts by some conservatives to use such rationales in support of comprehensive media regulation: It is all based on the “media access” playbook concocted by radical Leftist scholars a generation ago. As I summarized in my essay on, “The Surprising Ideological Origins of Trump’s Communications Collectivism“:

Media access advocates look to transform the First Amendment into a tool for social change to advance specific political ends or ideological objectives. Media access theory dispenses with both the editorial discretion rights and private property rights of private speech platforms. Private platforms become subject to the political whims of policymakers who dictate “fair” terms of access. We can think of this as communications collectivism.

Media access doctrine is rooted in an arrogant, elitist, anti-property, anti-freedom ethic that suggests the State is in a better position to dictate what can and cannot be said on private speech platforms. “It’s astonishing, yet nonetheless true,” I continued in that essay, “that the ideological roots of Trump’s anti-social media campaign lie in the works of those extreme Leftists and even media Marxists. He has just given media access theory his own unique nationalistic spin and sold this snake oil to conservatives.” Yet Trump and other national conservatives are embracing this contemptible doctrine because now more than ever the ends apparently justify the means in American politics. Never mind that all this could come back to haunt them when the Left somehow leverages this regulatory apparatus to control Fox News or other sites and content that conservatives favor! Once media platforms are viewed as just another thing to be controlled by politics, the only question is which politics will prevail and how they will be enforced. The Left and the Right certainly cannot both have their way given all that currently divides them.

Finally, what is utterly perplexing about all this is how much thanks national conservatives really owe to the major digital platforms they now seek to destroy. As I noted in my new Hill op-ed:

There has never been more opportunity for conservative viewpoints than right now. Each day on Facebook, the top-10 most shared links are dominated by pundits such as Ben Shapiro, Dan Bongino, Dinesh D’Souza and Sean Hannity. Right-leaning content is shared widely on Twitter each day. Websites like Dailywire.com and Foxnews.com get far more traffic than the New York Times or CNN.

Thus, conservatives might be shooting themselves in the foot if they were able to convince more legislatures to adopt the media access regulatory playbook, because it could have profound unintended consequences once the Left uses those tools to somehow restrict access to “hate speech” or “misinformation” — and then defines those terms so broadly as to include much of the top material posted by conservatives on Facebook and Twitter every day.

Not all conservatives have drunk the media access Kool-Aid. In the wake of Trump’s deplatforming from a few major sites, a wave of new Right-leaning digital services are being planned or have already launched. (Axios and Forbes recently summarized some of these efforts.) I don’t know which of these efforts will succeed, but more competition and platform-building are certainly superior to current calls by some Trump supporters for government regulation of mainstream social media services.

Again, this is the old Reagan vision at its finest! We can achieve a better media landscape, “only through the freedom and competition that the First Amendment sought to guarantee,” not through bureaucratic regulation. It remains the principled path forward.


Additional Reading:

Older essays & testimony:

Conservatives & Common Carriage: Contradictions & Challenges https://techliberation.com/2021/04/17/conservatives-common-carriage-contradictions-challenges/ https://techliberation.com/2021/04/17/conservatives-common-carriage-contradictions-challenges/#respond Sat, 17 Apr 2021 14:34:48 +0000 https://techliberation.com/?p=76871

Over at Discourse magazine I’ve posted my latest essay on how conservatives are increasingly flirting with the idea of greatly expanding regulatory control of private speech platforms via some sort of common carriage regulation or new Fairness Doctrine for the internet. It begins:

Conservatives have traditionally viewed the administrative state with suspicion and worried about their values and policy prescriptions getting a fair shake within regulatory bureaucracies. This makes their newfound embrace of common carriage regulation and media access theory (i.e., the notion that government should act to force access to private media platforms because they provide an essential public service) somewhat confusing. Recent opinions from Supreme Court Justice Clarence Thomas as well as various comments and proposals of Sen. Josh Hawley and former President Trump signal a remarkable openness to greater administrative control of private speech platforms. Given the takedown actions some large tech companies have employed recently against some conservative leaders and viewpoints, the frustration of many on the right is understandable. But why would conservatives think they are going to get a better shake from state-regulated monopolists than they would from today’s constellation of players or, more importantly, from a future market with other players and platforms?

I continue on to explain why conservatives should be skeptical of the administrative state being their friend when it comes to the control of free speech. I end by reminding conservatives what President Ronald Reagan said in his 1987 veto of legislation to reestablish the Fairness Doctrine: “History has shown that the dangers of an overly timid or biased press cannot be averted through bureaucratic regulation, but only through the freedom and competition that the First Amendment sought to guarantee.”

Read more at Discourse, and down below you will find several other recent essays I’ve written on the topic.

A Return of the Trustbusters Could Harm Consumers https://techliberation.com/2021/04/13/a-return-of-the-trustbusters-could-harm-consumers/ https://techliberation.com/2021/04/13/a-return-of-the-trustbusters-could-harm-consumers/#respond Tue, 13 Apr 2021 17:01:58 +0000 https://techliberation.com/?p=76868

Is it a time for the return of the trustbusters? Some politicians seem to imply that today’s tech giants have become modern-day robber barons taking advantage of the American consumer and, as a result, they argue that it is time for a return of aggressive antitrust enforcement and for dramatic changes to existing antitrust interpretations to address the concerns associated with today’s big business.

This criticism is not limited to one side of the aisle, with Senators Amy Klobuchar (D-MN) and Josh Hawley (R-MO) both proposing their own dramatic overhauls of antitrust laws and the House Judiciary Committee majority issuing a report that greatly criticizes the current technology market. In both cases these new proposals create presumptive bans on mergers for companies of certain size and lower the burdens on the government for intervening and proving its case. I have previously analyzed the potential impact of Senator Klobuchar’s proposal, and Senator Hawley’s proposal raises many similar concerns when it comes to its merger ban and shift away from existing objective standards.

Proponents on both sides of the aisle argue changing current antitrust standards is needed to fight big business, but sadly these modern-day trustbusters may not be the heroes they see themselves as. In fact, such a shift would harm American consumers and small businesses well beyond the tech sector.

The Trustbusters-Era Standards Would Fail Consumers

The original trustbusters of the late 19th and early 20th century created a system that was not always clear and could be abused by regulators subjectively determining what was and was not anti-competitive behavior. The result was that, in this earlier era, businesses and consumers could never be certain what behaviors would be considered violations.

The shift to the consumer welfare standard helped fix that problem by providing an objective framework that uses economic analysis to weigh the risks and benefits of behavior and judges it by its impact on consumers rather than on specific competitors. Unfortunately, these new proposals would shift away from this objective focus and return to a presumption that big is bad. This shift would be bad news not only for big business but for smaller businesses and consumers as well. Small businesses would lose an important exit strategy option with the presumptive ban on mergers with large companies, and consumers would miss out on benefits such as price reductions, improvements, and innovations that these mergers could bring.

While much of the debate around antitrust changes focuses on large tech firms such as Google, Apple, Facebook, and Amazon, changing antitrust laws would impact far more of the economy than just tech. Both the Hawley and Klobuchar proposals would bar mergers unless there is strong evidence proving their value (a “regulatory presumption” against mergers), but this presumption would impact industries such as pharmaceuticals, finance, and agriculture that also frequently have mergers and acquisitions that benefit consumers by helping to expand the distribution of a product or improve on an existing service. In fact, companies including L’Oreal and Nike could find any mergers or acquisitions presumptively prohibited under the limits in these proposals.

Existing Standards Can Adapt to Dynamic Markets Like Tech

Existing standards are still able to address concerns in dynamic and changing markets as well as in more established markets. For example, the Antitrust Modernization Commission concluded, “There is no need to revise the antitrust laws to apply different rules to industries in which innovation, intellectual property, and technological change are central features.”

Regulators’ sense of a technology market can sometimes be proven wrong by the evolution of a technology or by the disruption caused by a dramatic shift in the industry. For example, debates used to focus on MySpace and AOL, which have now become things of internet nostalgia. Today’s tech giants face growing challenges not only from each other in many cases, but also from many newer entrants, from Clubhouse and TikTok to Zoom and Shopify. Removing the need to firmly establish the elements of an antitrust case under existing standards would risk unnecessary intervention in the market or, more likely, could block actions that benefit consumers.

Some question whether this economic analysis-based standard can handle the zero-price services offered by many technology companies. While price is often the easiest focus, this standard also considers issues such as quality and innovation, making it elastic enough to address potential concerns even if the price is zero. Still, this does not mean that the definition of harm under the consumer welfare standard should be expanded to address any litany of concerns that cannot be objectively shown to have market harm.

Trustbusters’ Concerns with Tech Are Unlikely to Be Solved by Antitrust

Antitrust is also a poor tool to address concerns such as data privacy or content moderation, and using it to do so could allow for future abuse for other political ends. There is no guarantee that smaller companies would respond to existing market demands around issues such as content moderation any differently than the current large players. Additionally, when it comes to privacy and targeted advertising, smaller platforms would have to find new ways to gain revenue and might be forced to monetize the platform more to stay afloat without being able to rely on the revenue from a larger parent company. Finally, there is no guarantee that these smaller companies would be more innovative or dynamic, particularly as existing teams and talents are divided by breakups and walls are erected to prevent entry into certain markets.

The good news is some policymakers have realized that these problems exist and argued for preserving the existing framework and addressing these other concerns with appropriately targeted policies. For example, Sen. Mike Lee recently defended the consumer welfare standard and was critical of the negative impact “radically alter[ing] our antitrust regime” could have while still questioning some recent decisions around content moderation.

Conclusion

Many have hoped for a return of bipartisan cooperation in Washington, but unfortunately bad ideas can also emerge on both sides of the aisle. Shifting away from the consumer welfare standard would ultimately harm consumers at a time when innovation and economic recovery are especially critical.

Another NFT Explainer https://techliberation.com/2021/03/29/another-nft-explainer/ https://techliberation.com/2021/03/29/another-nft-explainer/#comments Mon, 29 Mar 2021 14:55:31 +0000 https://techliberation.com/?p=76855

I don’t understand the hype surrounding Non-Fungible Tokens (NFTs). Maybe that’s because, as someone who has studied copyright and technology issues for years, none of it seems very new to me. It’s just a remixing of some ideas and technologies that have been around for decades. Let me explain.

For at least 100 years, “ownership” of real property has been thought of as a “bundle of rights.” As a simple example, you may “own” the land your house sits on, but the city probably has a right to build and maintain a sidewalk across your yard and the general public has a right to walk across your property on that sidewalk. The gas company has the right to walk into your side yard to read your gas meter. Pilots have a right to fly over your house. Some other company or companies may have rights to any water and minerals in the ground below your house. Your homeowners association may even have a right to dictate what color you paint the exterior of your house.

This same “bundle of rights” concept also applies to copyright. Unless explicitly granted by contract, buying an original painting doesn’t mean you have the right to take a photograph of the painting and sell prints of the photograph. If you buy a DVD, you have the right to watch the DVD privately and you have the right to sell the DVD when you’re no longer interested in it. (That second right is called the “first sale doctrine” and there have been numerous Supreme Court cases and laws defining its exact boundaries.) But unless explicitly granted by contract, purchasing a DVD doesn’t mean you have the right to set up a projector and big screen and charge members of the public to watch it. That requires a “public performance” right.

When you buy most NFTs, you get very few of the rights that typically come with ownership. You might only get the right to privately exhibit the underlying work. And if you decide to later resell the NFT, the contract (embedded in the digital code of the NFT) may stipulate that the original artist gets a 10% royalty on every future sale of the work.
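To make that royalty mechanism concrete, here is a minimal Python sketch of the split. The function name and the 10% rate are hypothetical illustrations; real NFTs enforce this kind of rule in smart-contract code on a blockchain, not in a script like this.

```python
# Hypothetical illustration of the resale-royalty logic described above.
# Real NFTs encode this in smart-contract code (e.g., on a blockchain);
# the names and the 10% rate here are just for demonstration.

ROYALTY_RATE = 0.10  # the original artist's cut of every resale

def settle_resale(sale_price: float) -> dict:
    """Split resale proceeds between the original artist and the seller."""
    artist_cut = sale_price * ROYALTY_RATE
    seller_cut = sale_price - artist_cut
    return {"artist": artist_cut, "seller": seller_cut}

# On a $500,000 resale, the artist automatically receives $50,000
# and the seller keeps the remaining $450,000.
print(settle_resale(500_000))
```

The point is that the split happens automatically on every future sale, because the rule travels with the token itself rather than with a paper contract.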

The second thing you need to understand is the concept of “artificial scarcity.” As a simple example, in the art world, it’s common for photographers and painters to sell numbered, “limited edition” prints of their works. There’s no technological reason why they couldn’t print 1,000 copies of their work, or even register the print with a “print on demand” service that will continue making and selling prints as long as there are people who want to buy them. But limiting the number of prints made (even if each print is identical to every other print) is likely to raise the price. This is artificial scarcity. Most NFTs are an edition of one. Even if there are other exact copies of the underlying artwork sold as NFTs, each NFT is unique. This is like an artist selling numbered prints but not putting a limit on how many numbered prints they make. Each numbered print is technically unique because each has a different number. But without some artificial scarcity, the value of any one print may stay very low.

So if buying an NFT doesn’t get you any real rights and the scarcity is purely artificial, why are NFTs selling for hundreds of thousands of dollars? Here’s where all the technology really makes a difference. If you spend millions on a Picasso painting, you’re taking a lot of risks. First, you’re taking the risk that it’s a forgery, which would drop the value to near zero. Second is the risk that the painting will be stolen from you. Insurance can help deal with both problems, but that adds more complications. If you’re buying the painting as an investment, these complications reduce the “liquidity” of the asset. Liquidity is the ease with which an asset can be converted into cash without affecting its market value. Put more simply, liquidity is how easily the asset can be sold. Cash has long been considered the most liquid asset, but NFTs are arguably much more liquid than cash. NFTs don’t require anything physical to trade hands. And even electronic currency transfers take time and are subject to government oversight. NFTs are so new, they’re barely regulated. But by using blockchain technology, they can be easily and safely bought and sold anonymously. NFTs are a money launderer’s dream. It’s unclear whether NFTs are actually being used to launder money, but it’s a concern.

The other reason I think NFTs are so popular is speculation. Because NFTs are so liquid, and because there basically doesn’t even need to be an underlying work, the initial cost to “mint” (create) an NFT is near zero. And by using blockchain systems, NFTs can be resold with little overhead. (Though they can also be configured to ensure a certain overhead, e.g. that 10% of every resale goes to the original artist.) These characteristics, along with the newness of NFTs, make them a popular marketplace for speculators: people who purchase assets with the intent of holding them for only a short time and then selling them for a profit.

NFTs started to enter the public consciousness in February 2021, after the 10-year-old “Nyan Cat” animation sold for over half a million dollars. This was also just a few weeks after the GameStop stock short squeeze made a compelling case that average investors, working in concert, could upset the stock market and make millions. So it’s no wonder that there is rampant speculation in NFTs.

In conclusion, NFTs will be a tremendous benefit to digital artists, who did not previously have an easy way to prove the authenticity of their works (which is of great importance to investors) or to provide a digital equivalent to numbered prints in the physical art world. But the hype about NFTs is just that: hype. It’s driven by speculators, and you’d be crazy to think of this as a worthy investment opportunity.

Thoughts on Content Moderation Online https://techliberation.com/2021/03/25/thoughts-on-content-moderation-online/ Thu, 25 Mar 2021 14:23:57 +0000

Content moderation online is a newsworthy and heated political topic. In the past year, social media companies and Internet infrastructure companies have gotten much more aggressive about banning and suspending users and organizations from their platforms. Today, Congress is holding another hearing for tech CEOs to explain and defend their content moderation standards. Relatedly, Ben Thompson at Stratechery recently had interesting interviews with Patrick Collison (Stripe), Brad Smith (Microsoft), Thomas Kurian (Google Cloud), and Matthew Prince (Cloudflare) about the difficult road ahead re: content moderation by Internet infrastructure companies.

I’m unconvinced of the need to rewrite Section 230, but, like the rest of the Telecom Act (which turned 25 last month), the law is showing its age. There are legal questions about Internet content moderation that would benefit from clarification by courts and legal scholars.

(One note: Social media common carriage, which some advocates on the left, right, and center have proposed, won’t work well, largely for the same reason ISP common carriage won’t work well—heterogeneous customer demands and a complex technical interface to regulate—a topic for another essay.)

The recent increase in content moderation and user bans raises questions–for lawmakers in both parties–about how these practices interact with existing federal laws and court precedents. Some legal issues that need industry, scholar, and court attention:

Public Officials’ Social Media and Designated Public Forums

Does Knight Institute v. Trump prevent social media companies’ censorship on public officials’ social media pages?

The 2nd Circuit, in Knight Institute v. Trump, deemed the “interactive space” beneath Pres. Trump’s tweets a “designated public forum,” which meant that “he may not selectively exclude those whose views he disagrees with.” For the 2nd Circuit and any courts that follow that decision, the “interactive space” of most public officials’ Facebook pages, Twitter feeds, and YouTube pages seems to be a designated public forum.

I read the Knight Institute decision when it came out and I couldn’t shake the feeling that the decision had some unsettling implications. The reason the decision seems amiss struck me recently:

Can it be lawful for a private party (Twitter, Facebook, etc.) to censor members of the public who are using a designated public forum (like replying to President Trump’s tweets)? 

That can’t be right. We have designated public forums in the physical world, like when a city council rents out a church auditorium or Lions Club hall for a public meeting. All speech in a designated public forum is accorded the strong First Amendment rights found in traditional public forums. I’m unaware of a case on the subject but a court is unlikely to allow the private owner of a designated public forum, like a church, to censor or dictate who can speak when its facilities are used as a designated public forum.

The straightforward implication from Knight Institute v. Trump seems to be that neither politicians nor social media companies can make viewpoint-based decisions about who can comment on or access an official’s social media account.

Knight Institute creates more First Amendment problems than it solves, and could be reversed someday. [Ed. update: In April 2021, the Supreme Court vacated the 2nd Circuit decision as moot since Trump is no longer president. However, a federal district court in Florida concluded, in Attwood v. Clemons, that public officials’ “social media accounts are designated public forums.” The Knight Institute has likewise sued Texas Attorney General Paxton for blocking users, claiming that his social media feed is a designated public forum. It’s clear more courts will adopt this rule.] But to the extent Knight Institute v. Trump is good law, it seems to limit how social media companies moderate public officials’ pages and feeds.

Cloud neutrality

How should tech companies, lawmakers, and courts interpret Sec. 512?

Wired recently published a piece about “cloud neutrality,” which draws on net neutrality norms of nondiscrimination toward content and applies them to Internet infrastructure companies. I’m skeptical of both the need for and the constitutionality of the idea but, arguably, the US has a soft version of cloud neutrality embedded in Section 512 of the DMCA.

The law grants the copyright liability safe harbor to Internet infrastructure companies only if:

the transmission, routing, provision of connections, or storage is carried out through an automatic technical process without selection of the material by the service provider.

17 USC § 512(a).

Perhaps a copyright lawyer can clarify, but it appears that Internet infrastructure companies may lose their copyright safe harbor if they handpick material to censor. To my knowledge, there is no scholarship or court decision on this question.

State Action

What evidence would a user-plaintiff need to show that their account or content was removed due to state action?

Most complaints of state action in social media companies’ content moderation are dubious. And while state action is hard to prove, in narrow circumstances it may apply. The Supreme Court has said that when there is a “sufficiently close nexus between the State and [a] challenged action,” the action of a private company will be treated as state action. For that reason, content removals made after non-public pressure or demands on social media moderators from federal and state officials likely aren’t protected by the First Amendment or Section 230.

Most examples of federal and state officials privately jawboning social media companies will never see the light of day. However, it probably occurs. Based on Politico reporting, for instance, it appears that officials in a few states leaned on social media companies to remove anti-lockdown protest events last April. It’s hard to know exactly what occurred in those private conversations, and Politico has updated the story a few times, but examples like that may qualify as state action.

Any public official who engages in non-public jawboning that results in content moderation could also face a Section 1983 claim: civil liability for depriving an affected user of their constitutional rights.

Finally, what should Congress do about foreign state action that results in tech censorship in the US? A major theme of the Stratechery interviews is that many tech companies feel pressure to set their moderation standards based on what foreign governments censor and prohibit. Content removal from online services because of foreign influence isn’t a First Amendment problem, but it is a serious free speech problem for Americans.

Many Republicans and Democrats want to punish large tech companies for real or perceived unfairness in content moderation. That’s politics, I suppose, but it’s a damaging instinct. The singular focus on repealing or reforming Section 230 distracts free-market and free-speech advocates from, among other things, alarming proposals for changes to the FEC that would empower it to criminalize more political speech, and it distracts everyone from the other legal questions about content moderation raised here. Hopefully the Biden DOJ or congressional hearings will take some of these up.

European Industrial Policy Follies https://techliberation.com/2021/02/15/european-industrial-policy-follies/ Mon, 15 Feb 2021 16:17:36 +0000

Over at Discourse magazine, Connor Haaland and I have a new essay (“Can European-Style Industrial Policies Create Tech Supremacy?”) examining Europe’s effort to develop national champions in a variety of tech sectors using highly targeted industrial policy. The results, we find, have not been encouraging.

Thus far, however, the Europeans don’t have much to show for their attempts to produce home-grown tech champions. Despite highly targeted and expensive efforts to foster a domestic tech base, the EU has instead generated a string of industrial policy failures that should serve as a cautionary tale for U.S. pundits and policymakers, who seem increasingly open to more government-steered innovation efforts.

We examine case studies in internet access, search, GPS, video services, and the sharing economy. We then explore newly proposed industrial policy efforts aimed at developing Europe’s domestic AI market. We note how:

no amount of centralized state planning or spending will be able to overcome Europe’s aversion to technological risk-taking and disruption. The EU’s innovation culture generally values stability—of existing laws, institutions and businesses—over disruptive technological change. […] There are no European versions of Microsoft, Google or Apple, even though Europeans obviously demand and consume the sort of products and services those U.S.-based companies provide. It’s simply not possible given the EU’s current regulatory regime.

It seems unlikely that Europe will have much better luck developing home-grown champions in AI and robotics using this same playbook. “American academics and policymakers with an affinity for industrial policy might want to consider a model other than Europe’s misguided combination of fruitless state planning and heavy-handed regulatory edicts,” we conclude.

Head over to Discourse to read the entire essay.

The End of Permissionless Innovation? https://techliberation.com/2021/01/10/the-end-of-permissionless-innovation/ Sun, 10 Jan 2021 21:24:12 +0000

Time magazine recently declared 2020 “The Worst Year Ever.” By historical standards that may be a bit of hyperbole. For America’s digital technology sector, however, that headline rings true. After a remarkable 25-year run that saw an explosion of innovation and the rapid ascent of a group of U.S. companies that became household names across the globe, politicians and pundits in 2020 declared the party over. “We now are on the cusp of a new era of tech policy, one in which the policy catches up with the technology,” says Darrell M. West of the Brookings Institution in a recent essay, “The End of Permissionless Innovation.” West cites the House Judiciary Antitrust Subcommittee’s October report on competition in digital markets—where it equates large tech firms with the “oil barons and railroad tycoons” of the Gilded Age—as the clearest sign that politicization of the internet and digital technology is accelerating. It is hardly the only indication that America is set to abandon permissionless innovation and revisit the era of heavy-handed regulation for information and communication technology (ICT) markets.

Equally significant is the growing bipartisan crusade against Section 230, the provision of the 1996 Telecommunications Act that shields “interactive computer services” from liability for information posted or published on their systems by users. No single policy has been more important to the flourishing of online speech or commerce than Sec. 230 because, without it, online platforms would be overwhelmed by regulation and lawsuits. But now, long knives are coming out for the law, with plenty of politicians and academics calling for it to be gutted. Calls to reform or repeal Sec. 230 were once exclusively the province of left-leaning academics or policymakers, but this year it was conservatives in the White House, on Capitol Hill, and at the Federal Communications Commission (FCC) who became the leading cheerleaders for scaling back or eliminating the law.
President Trump railed against Sec. 230 repeatedly on Twitter, and most recently vetoed the annual National Defense Authorization Act in part because Congress did not include a repeal of the law in the measure. Meanwhile, conservative lawmakers in Congress such as Sens. Josh Hawley and Ted Cruz have used subpoenas, angry letters, and heated hearings to hammer digital tech executives about their content moderation practices. Allegations of anti-conservative bias have motivated many of these efforts. Even Supreme Court Justice Clarence Thomas questioned the law in a recent opinion.

Other proposed regulatory interventions include calls for new national privacy laws, an “Algorithmic Accountability Act” to regulate artificial intelligence technologies, and a growing variety of industrial policy measures that would open the door to widespread meddling with various tech sectors. Some officials in the Trump administration even pushed for a nationalized 5G communications network in the name of competing with China.

This growing “techlash” signals a bipartisan “Back to the Future” moment, with the possibility of the U.S. reviving a regulatory playbook that many believed had been discarded in history’s dustbin. Although plenty of politicians and pundits are taking victory laps and giving each other high-fives over the impending end of the permissionless innovation era, it is worth considering what America will be losing if we once again apply old top-down, permission slip-oriented policies to the technology sector.

Permissionless Innovation: The Basics

As an engineering principle, permissionless innovation represents the general freedom to tinker and develop new ideas and products in a relatively unconstrained fashion. As I noted in a recent book on the topic, permissionless innovation can also describe a governance disposition or regulatory default toward entrepreneurial activities. In this sense, permissionless innovation refers to the idea that experimentation with new technologies and innovations should generally be permitted by default and that prior restraints on creative activities should be avoided except in those cases where clear and immediate harm is evident. There is an obvious relationship between the narrow and broad definitions of permissionless innovation. When governments lean toward permissionless innovation as a policy default, it is likely to encourage freewheeling experimentation more generally. But permissionless innovation can sometimes occur in the wild, even when public policy instead tends toward its antithesis—the precautionary principle. As I noted in my latest book, tinkerers and innovators sometimes behave evasively and act to make permissionless innovation a reality even when public policy discourages it through precautionary restraints.

To be clear, permissionless innovation as a policy default has not meant anarchy. Quite the opposite, in fact. In the United States, over the past 25 years, no major federal agencies or laws regulating technology were eliminated. Indeed, most agencies grew bigger. But in spite of this, entrepreneurs during this period got more green lights than red ones, and innovation was treated as innocent until proven guilty. This is how and why social media and the sharing economy developed and prospered here and not in other countries, where layers of permission slips prevented such innovations from ever getting off the drawing board.

The question now is: how will the shift away from permissionless innovation as a policy default in the U.S. affect innovative activity here more generally? Economic historians Deirdre McCloskey and Joel Mokyr teach us that societal and political attitudes toward growth, risk-taking and entrepreneurialism have a powerful connection with the competitive standing of nations and the possibility of long-term prosperity. If America’s innovation culture sours on the idea of permissionless-ness and moves toward a precautionary principle-based model, creative minds will find it harder to experiment with bold new ideas that could help enrich the nation and improve the well-being of the citizenry—which is exactly why America discarded its old top-down regulatory model in the first place.

Why America Junked the Old Model

Perhaps the easiest way to put some rough bookends on the beginning and end of America’s permissionless innovation era is to date it to the birth and impending death of Sec. 230 itself. The enactment in 1996 of the Telecommunications Act was important, not only because it included Sec. 230, but also because the law created a sort of policy firewall between the old and new worlds of ICT regulation.

The old ICT regime was rooted in a complex maze of federal, state and local regulatory permission slips. If you wanted to do anything truly innovative in the old days, you typically needed to get some regulator’s blessing first—sometimes multiple blessings. The exception was the print sector, which enjoyed robust First Amendment protection from the time of the nation’s founding. Newspapers, magazines and book publishers were left largely free of prior restraints regarding what they published or how they innovated. The electronic media of the 20th century were not so lucky. Telephony, radio, television, cable, satellite and other technologies were quickly encumbered with a crazy quilt of federal and state regulations. Those restraints included price controls, entry restrictions, speech restrictions and endless agency threats.

ICT policy started turning the corner in the late 1980s after the old regulatory model failed to achieve its mission of more choice, higher quality and lower prices for media and communications. Almost everyone accepted that change was needed, and it came fast. The 1990s became a whirlwind of policy and technological change. In the mid-1990s, the Clinton administration decided to allow open commercialization of the internet, which, until then, had mostly been a plaything for government agencies and university researchers. But it was the enactment of the 1996 telecommunications law that sealed the deal. Not only did the new law largely avoid regulating the internet like analog-era ICT, but, more importantly, it included Sec. 230, which helped ensure that future regulators or overzealous tort lawyers would not undermine this wonderful new resource. A year later, the Clinton administration put a cherry on top with the release of its Framework for Global Electronic Commerce. This bold policy statement announced a clean break from the past, arguing that “the private sector should lead [and] the internet should develop as a market-driven arena, not a regulated industry.” Permissionless innovation had become the foundation of American tech policy.

The Results

Ideas have consequences, as they say, and that includes ramifications for domestic business formation and global competitiveness. While the U.S. was allowing the private sector to largely determine the shape of the internet, Europe was embarking on a very different policy path, one that would hobble its tech sector. America’s more flexible policy ecosystem proved to be fertile ground for digital startups. Consider the rise of “unicorns,” shorthand for companies valued at $1+ billion. “In terms of the global distribution of startup success,” notes the State of the Venture Capital Industry in 2019, “the number of private unicorns has grown from an initial list of 82 in 2015 to 356 in Q2 2019,” and fully half of them are U.S.-based.

The United States is also home to the most innovative tech firms. Over the past decade, Strategy& (PricewaterhouseCoopers’ strategy consulting business) has compiled a list of the world’s most innovative companies, based on R&D efforts and revenue. Each year that list is dominated by American tech companies. In 2013, 9 of the top 10 most innovative companies were based in the U.S., and most of them were involved in computing, software and digital technology. Global competition is intensifying, but in the most recent 2018 list, 15 of the top 25 companies are still U.S.-based giants, with Amazon, Google, Intel, Microsoft, Apple, Facebook, Oracle and Cisco leading the way. Meanwhile, European digital tech companies cannot be found on any such list. While America’s tech companies are household names across the European continent, most people struggle to name a single digital innovator headquartered in the EU. Permissionless innovation crushed the precautionary principle in the trans-Atlantic policy wars.

European policymakers have responded to the continent’s digital stagnation by doubling down on their aggressive regulatory efforts. The EU closed out 2020 with two comprehensive new measures (the Digital Services Act and the Digital Markets Act), while the U.K. simultaneously pursued a new “online harms” law. Taken together, these proposals represent “the biggest potential expansion of global tech regulation in years,” according to The Wall Street Journal. The measures will greatly expand extraterritorial control over American tech companies. Having decimated their domestic technology base and driven away innovators and investors, EU officials are now resorting to plugging budget shortfalls with future antitrust fines on U.S.-based tech companies. It has essentially been a lost quarter century for Europe on the information technology front, and now American companies are expected to pay for it.

Republicans Revive ‘Regulation-By-Raised-Eyebrow’

In light of the failure of Europe’s precautionary principle-based policy paradigm, and considering the threat now posed by the growing importance of various Chinese tech companies, one might think U.S. policymakers would be celebrating the competitive advantages created by a quarter century of American tech dominance and contemplating how to apply this winning vision to other sectors of the economy. Alas, despite its amazing run, business and political leaders are now turning against permissionless innovation as America’s policy lodestar. What is most surprising is how this reversal is now being championed by conservative Republicans, who traditionally support deregulation.

President Trump called for tightening the screws on Big Tech. For example, in a May 2020 Executive Order on “Preventing Online Censorship,” he accused online platforms of “selective censorship that is harming our national discourse” and suggested that “these platforms function in many ways as a 21st century equivalent of the public square.” Trump and his supporters put Google, Facebook, Twitter and Amazon in their crosshairs, accusing them of discriminating against conservative viewpoints or values.

The irony here is that no politician owes more to modern social media platforms than Donald Trump, who effectively used them to communicate his ideas directly to the American people. Moreover, conservative pundits now enjoy unparalleled opportunity to get their views out to the wider world thanks to all the digital soapboxes they can now stand on. YouTube and Twitter are chock-full of conservative punditry, and the daily list of top 10 search terms on Facebook is dominated consistently by conservative voices; “the right wing has a massive advantage,” according to Politico. Nonetheless, conservatives insist they still don’t get a fair shake from the cornucopia of new communications platforms that earlier generations of conservatives could have only dreamed about having at their disposal.
They think the deck is stacked against them by Silicon Valley liberals. This growing backlash culminated in a remarkable Senate Commerce Committee hearing on Oct. 28 in which congressional Republicans hounded tech CEOs, called for more favorable treatment of conservatives, and threatened social media companies with regulation if conservative content was taken down. Liberal lawmakers, by contrast, uniformly demanded the companies do more to remove content they felt was harmful or deceptive in some fashion. In many cases, lawmakers on both sides of the aisle were talking about the exact same content, putting the companies in the impossible position of having to devise a Goldilocks formula to get the content balance just right, even though it would be impossible to make both sides happy.

In the broadcast era, this sort of political harassment was known as the “regulation-by-raised-eyebrow” approach, which allowed officials to get around First Amendment limitations on government content control. Congressional lawmakers and regulators at the FCC would set up show trial hearings and use political intimidation to gain programming concessions from licensed radio and television operators. These shakedown tactics didn’t always work, but they often resulted in forms of soft censorship, with media outlets editing content to make politicians happy.

The same dynamic is at work today. Thus, when a firebrand politician like Sen. Josh Hawley suggests “we’d be better off if Facebook disappeared,” or when Sohrab Ahmari, the conservative op-ed editor at the New York Post, calls for the nationalization of Twitter, they likely understand these extreme proposals won’t happen. But such jawboning represents an easy way to whip up your base while also indirectly putting intense pressure on companies to tweak their policies. Make us happy, or else!
It is not always clear what that “or else” entails, but the accumulated threats probably have some effect on content decisions made by these firms. Whether all this means that Sec. 230 gets scrapped or not shouldn’t distract from the more pertinent fact: few on the political right are preaching the gospel of permissionless innovation anymore.

Even tech companies and Silicon Valley-backed organizations now actively distance themselves from the term. Zachary Graves, head of policy at Lincoln Network, a tech advocacy organization, worries that permissionless innovation is little more than a “legitimizing facade for anarcho-capitalists, tech bros, and cynical corporate flacks.” He lines up with the growing cast of commentators on both the left and right who endorse a “Tech New Deal” without getting concrete about what that means in practice. What it likely means is a return to a well-worn regulatory playbook of the past that resulted in innovation stagnation and crony capitalism.

A More Political Future

Indeed, as was the case during past eras of permission slip-based policy, our new regulatory era will be a great boon to the largest tech companies. Many people advocate greater regulation in the name of promoting competition, choice, quality and lower prices. But merely because someone proclaims that they are looking to serve the public interest doesn’t mean the regulatory policies they implement will achieve those well-intentioned goals. The means to the end—new rules, regulations and bureaucracies—are messy, imprecise and often counterproductive.

Fifty years ago, the Nobel prize-winning economist George Stigler taught us that, “as a rule, regulation is acquired by the industry and is designed and operated primarily for its benefits.” In other words, new regulations often help to entrench existing players rather than fostering greater competition. Countless experts since then have documented the problem of regulatory capture in various contexts.

If the past is prologue, we can expect many large tech firms to openly embrace regulation as they come to see it as a useful way of preserving market share and fending off pesky new rivals, most of whom will not be able to shoulder the compliance burdens and liability threats associated with permission slip-based regulatory regimes. True to form, in recent congressional hearings, Facebook head Mark Zuckerberg called on lawmakers to begin regulating social media markets. The company then rolled out a slick new website and advertising campaign inviting new rules on various matters. It is always easy for the king of the hill to call for more regulation when that hill is a mound of red tape of their own making—and which few others can ascend. It is a lesson we should have learned in the AT&T era, when a decidedly unnatural monopoly was formed through a partnership between company officials and the government.

Image Credit: Infrogmation/Wikimedia Commons

Many independent telephone companies existed across America before AT&T’s leaders cut sweetheart deals with policymakers that tilted the playing field in its favor and undermined competition. With rivals hobbled by entry restrictions and other rules, Ma Bell went on to enjoy more than a half century of stable market share and guaranteed rates of return. Consumers, by contrast, were expected to be content with plain-vanilla telephone services that barely changed. Some of us are old enough to remember when the biggest “innovation” in telephony involved the move from rotary-dial phones to the push-button Princess phone, which, we were thrilled to discover, came in multiple colors and had a longer cord.

In a similar way, the impending close of the permissionless innovation era signals the twilight of technological creative destruction and its replacement by a new regime of political favor-seeking and logrolling, which could lead to innovation stagnation. The CEOs of the remaining large tech companies will be expected to make regular visits to the halls of Congress and regulatory agencies (and to all those fundraising parties, too) to get their marching orders, just as large telecom and broadcaster players did in the past. We will revert to the old historical trajectory, which saw communications and media companies securing marketplace advantages more through political machinations than marketplace merit.

Will Politics Really Catch Up?

While permissionless innovation may be falling out of favor with elites, America’s entrepreneurial spirit will be hard to snuff out, even when layers of red tape make it riskier to be creative. If for no other reason, permissionless innovation still has a fighting chance so long as Congress struggles to enact comprehensive technology measures. General legislative dysfunction and profound technological ignorance are two reasons that Congress has largely become a non-actor on tech policy in recent years. But the primary limitation on legislative meddling is the so-called pacing problem, which refers to the way technological innovation often outpaces the ability of laws and regulations to keep up. “I have said more than once that innovation moves at the speed of imagination and that government has traditionally moved at, well, the speed of government,” observed former Federal Aviation Administration head Michael Huerta in a 2016 speech.

DNA sequencing machine. Image Credit: Assembly/Getty Images

The same factors that drove the rise of the internet revolution—digitization, miniaturization, ubiquitous mobile connectivity and constantly increasing processing power—are spreading to many other sectors and challenging precautionary policies in the process. For example, just as “Moore’s Law” relentlessly powers the pace of change in ICT sectors, the “Carlson curve” now fuels genetic innovation. The curve refers to the fact that, over the past two decades, the cost of sequencing a human genome has plummeted from over $100 million to under $1,000, a rate nearly three times faster than Moore’s Law.

Speed isn’t the only factor driving the pacing problem. Policymakers also struggle with metaphysical considerations about how to define the things they seek to regulate. It used to be easy to agree what a phone, television or medical tracking device was for regulatory purposes. But what do those terms really mean in the age of the smartphone, which incorporates all of them and much more? “‘Tech’ is a very diverse, widely-spread industry that touches on all sorts of different issues,” notes tech analyst Benedict Evans. “These issues generally need detailed analysis to understand, and they tend to change in months, not decades.” This makes regulating the industry significantly more challenging than it was in the past.

It doesn’t mean the end of regulation—especially for sectors already encumbered by many layers of preexisting rules. But these new realities lead to a more interesting game of regulatory whack-a-mole: pushing down technological innovation in one way often means it simply pops up somewhere else. The continued rapid growth of what some call “the new technologies of freedom”—artificial intelligence, blockchain, the Internet of Things, etc.—should give us some reasons for optimism. It’s hard to put these genies back in their bottles now that they’re out. This is even more true thanks to the growth of innovation arbitrage—both globally and domestically. Creators and capital now move fluidly across borders in pursuit of more hospitable innovation and investment climates. Recently, some high-profile tech CEOs like Elon Musk and Joe Lonsdale have relocated from California to Texas, citing tax and regulatory burdens as key factors in their decisions. Oracle, America’s second-largest software company, also just announced it is moving its corporate headquarters from Silicon Valley to Austin, just over a week after Hewlett Packard Enterprise said it too is moving its headquarters from California to Texas—in this case, Houston. “Voting with your feet” might actually still mean something, especially when it is major tech companies and venture capitalists abandoning high-tax, over-regulated jurisdictions.
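The Carlson curve comparison above is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses the round figures cited in the text (a drop from roughly $100 million to $1,000 over two decades) as illustrative assumptions, not precise data:

```python
import math

# Illustrative figures from the text (rounded assumptions, not precise data):
start_cost = 100_000_000  # approx. cost of sequencing a human genome at the start ($)
end_cost = 1_000          # approx. cost two decades later ($)
years = 20                # assumed span of the decline

# Number of times the cost halved, assuming a steady exponential decline.
halvings = math.log2(start_cost / end_cost)
halving_time = years / halvings  # years per halving

# Classic Moore's Law cadence: transistor counts double about every 2 years.
moore_cycle = 2.0
speedup = moore_cycle / halving_time

print(f"Cost halved about {halvings:.1f} times, once every {halving_time:.2f} years.")
print(f"That is roughly {speedup:.1f}x the pace of a 2-year Moore's Law cycle.")
```

Averaged uniformly over the whole span, the decline works out to a cost halving every 1.2 years or so, somewhat less than the "nearly three times faster" multiple quoted above; that larger figure reflects the especially steep stretch of the curve after next-generation sequencing arrived around 2008, which is the portion most relevant to the pacing problem.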

Advocacy Remains Essential

But we shouldn’t imagine that technological change is inevitable or fall into the trap of thinking of it as a sort of liberation theology that will magically free us from repressive government controls. Policy advocacy still matters. Innovation defenders will need to continue to push back against the most burdensome precautionary policies, while also promoting reforms that protect entrepreneurial endeavors.

The courts offer us great hope. Groups like the Institute for Justice, the Goldwater Institute, the Pacific Legal Foundation and others continue to litigate successfully in defense of the freedom to innovate. While the best we can hope for in the legislative arena may be perpetual stalemate, these and other public interest law firms are netting major victories in courtrooms across America. Sometimes court victories force positive legislative changes, too. For example, in 2015, the Supreme Court handed down North Carolina State Board of Dental Examiners v. Federal Trade Commission, which held that state licensing boards controlled by active market participants cannot claim immunity from federal antitrust laws unless they are actively supervised by the state. This decision made much-needed occupational licensing reform an agenda item across America. Many states introduced or adopted bipartisan legislation aimed at reforming or sunsetting occupational licensing rules that undermine entrepreneurship.

Even more exciting are proposals that would protect citizens’ “right to earn a living.” This right would allow individuals to bring suit if they believe a regulatory scheme or decision has unnecessarily infringed upon their ability to earn a living within a legally permissible line of work. Meanwhile, there have been ongoing state efforts to advance “right to try” legislation that would expand medical treatment options for Americans tired of overly paternalistic health regulations.

Perhaps, then, it is too early to close the book on the permissionless innovation era. While dark political clouds loom over America’s technological landscape, there are still reasons to believe the entrepreneurial spirit can prevail.
5 Tech Policy Topics to Follow in the Biden Administration and 117th Congress https://techliberation.com/2020/11/12/5-tech-policy-topics-to-follow-in-the-biden-administration-and-117th-congress/ https://techliberation.com/2020/11/12/5-tech-policy-topics-to-follow-in-the-biden-administration-and-117th-congress/#comments Thu, 12 Nov 2020 14:08:17 +0000 https://techliberation.com/?p=76818

In a five-part series at the American Action Forum published prior to the 2020 presidential election, I presented the candidates’ positions on a range of tech policy topics, including the race to 5G, Section 230, antitrust, and the sharing economy. Now that the election is over, it is time to examine which topics in tech policy will gain more attention and how the debate around various tech policy issues may change. In no particular order, here are five key tech policy issues to be aware of heading into a new administration and a new Congress.

The Use of Soft Law for Tech Policy

In 2021, it is likely America will still have a divided government, with Democrats controlling the White House and House of Representatives and Republicans expected to narrowly control the Senate. One likely result of divided government, particularly between the two houses of Congress, is that many tech policy proposals will face logjams, leaving many questions of tech policy without the legislation or hard law framework that might be desired. As a result, we are likely to continue to see “soft law”—regulation by various sub-regulatory means such as guidance documents, workshops, and industry consultations—rather than formal action. While it appears we will also see more formal regulatory action from the administrative state in a Biden Administration, such actions require a lengthy process of comment periods and formal or informal rulemaking. As technology continues to accelerate, many agencies turn to soft law to avoid “pacing problems,” where policy cannot react as quickly as technology and rules may be outdated by the time they go into effect.

A soft law approach can be preferable to a hard law approach as it is often able to better adapt to rapidly changing technologies. Policymakers in this new administration, however, should work to ensure that they are using this tool in a way that enables innovation and that appropriate safeguards ensure that these actions do not become a crushing regulatory burden. 

Return of the Net Neutrality Debate

One key difference between President Trump and President-elect Biden’s stances on tech policy concerns whether the Federal Communications Commission (FCC) should classify internet service providers (ISPs) as Title II “common carrier services,” thereby enabling regulations such as “net neutrality” that place additional requirements on how these service providers can prioritize data. President-elect Biden has been clear in the past that he favors reinstating net neutrality.

The Title II classification and accompanying regulations were imposed during the Obama Administration, and the FCC removed both during the Trump Administration. Critics of that repeal made many hyperbolic claims at the time, such as that Netflix streams would be interrupted or that ISPs would use their freedom in a world without net neutrality to block abortion resources or pro-feminist groups. These concerns have proven to be misguided. If anything, the COVID-19 pandemic has shown the benefits of the robust internet infrastructure and expanded investment that a light-touch approach has yielded.

It is likely that net neutrality will once again be debated. Beyond the imposition of these restrictions themselves, repeated changes in such a key classification could create additional regulatory uncertainty and deter or delay investment and innovation in this valuable infrastructure. To overcome such concerns, congressional action could provide certainty in a bipartisan and balanced way and avoid dramatic back-and-forth swings.

Debates Regarding Sharing Economy Providers’ Classification as Independent Contractors

California voters passed Proposition 22, undoing the misguided reclassification under AB5 of app-based service drivers as employees rather than independent contractors; during the campaign, however, President-elect Biden stated that he supports AB5 and called for a similar approach nationwide. Such an approach would make life harder for new sharing economy platforms and for a wide range of independent workers (such as freelance journalists) at a time when the country is trying to recover economically.

Changing classifications to make it more difficult to treat service providers as independent contractors makes it less likely that companies such as Fiverr or TaskRabbit could continue offering platforms where individuals market their skills. Reclassification as employees also misunderstands why many people choose to engage in gig economy work and the advantages its flexibility provides. As my AAF colleague Isabel Soto notes, a national approach similar to that found in the Protecting the Right to Organize (PRO) Act “could see between $3.6 billion and $12.1 billion in additional costs to businesses” at a time when many are seeking to recover during the recession. Instead, both parties should look for solutions that preserve the benefits of the flexible arrangements many seek in such work, while allowing creative solutions and opportunities for businesses that wish to provide additional benefits to workers without risking reclassification.

Shifting Conversations and Debates Around Section 230 

Section 230 has recently faced most of its criticism from Republicans regarding allegations of anti-conservative bias. President-elect Biden, however, has also called to revoke Section 230 and to set up a taskforce regarding “Online Harassment and Abuse.” While this may seem like a positive step to resolving concerns about online content, it could also open the door to government intervention in speech that is not widely agreed upon and chip away at the liability protection for content moderation. 

For example, even though the Stop Enabling Sex Traffickers Act targeted the heinous crime of sex trafficking (which was already not subject to Section 230 protection) and was aimed at companies such as Backpage, where such illegal activity was known to occur, it has resulted in legitimate speech such as Craigslist personal ads being removed and in companies such as Salesforce being sued over what third parties used their products for. A carveout for hate speech or misinformation would pose even more difficulties for many businesses. These terms do not have clearly agreed-upon meanings and often require far more nuanced judgment in content moderation decisions. Enforcing limits on online speech in the United States, even on distasteful and hateful language, would dramatically change the prevailing interpretation of the First Amendment, under which such speech remains protected, and truly enforcing those limits would require significant government intrusion. In the UK, for example, an average of nine people a day were questioned or arrested over offensive or harassing “trolling” in online posts, messages, or forums under a law targeting the kind of online harassment and abuse the proposed taskforce would be expected to consider.

Online speech has provided new ways to connect, and Section 230 keeps the barriers to entry low. It is fair to be concerned about the impact of negative behavior, but policymakers should also recognize the impact that online spaces have had on allowing marginalized communities to connect and be concerned about the unintended consequences changes to Section 230 could have. 

Continued Antitrust Scrutiny of “Big Tech” 

One part of the “techlash” that shows no sign of diminishing in the new administration or new Congress is using antitrust to go after “Big Tech.” While it remains to be seen if the Biden Department of Justice will continue the current case against Google, there are indications that they and congressional Democrats will continue to go after these successful companies with creative theories of harm that do not reflect the current standards in antitrust. 

Instead of assuming that a large and popular company automatically merits competition scrutiny, or attempting to use antitrust to achieve policy changes for which it is an ill-fitted tool, the next administration should return to the principled approach of the consumer welfare standard. Under that approach, antitrust focuses on consumers, not competitors: a company must be shown to be dominant in its market, to be abusing that dominance in some way, and to be harming consumers. This approach also provides an objective standard that lets companies and consumers know how actions will be judged under competition law. Based on what is publicly known, the proposed cases against the large tech companies fail at least one element of this test.

There will likely be a shift in some of the claimed harms, but unfortunately scrutiny of large tech companies and calls to change antitrust laws to go after these companies are likely to continue. 

Conclusion 

There are many other technology and innovation issues the next administration and Congress will face, including not only the issues mentioned above but also emerging technologies like 5G, the Internet of Things, and autonomous vehicles. Other issues, such as the digital divide, provide an opportunity for policymakers on both sides of the aisle to come together, have a beneficial impact, and craft creative, adaptable solutions. Hopefully, the Biden Administration and the new Congress will continue a light-touch approach that allows entrepreneurs to pursue innovative ideas and continues American leadership in the technology sector.

On Defining “Industrial Policy” https://techliberation.com/2020/09/03/on-defining-industrial-policy/ https://techliberation.com/2020/09/03/on-defining-industrial-policy/#respond Thu, 03 Sep 2020 16:26:20 +0000 https://techliberation.com/?p=76808

In his debut essay for the new Agglomerations blog, my former colleague Caleb Watney, now Director of Innovation Policy for the Progressive Policy Institute, seeks to better define a few important terms, including technology policy, innovation policy, and industrial policy. In the end, however, he decides to basically dispense with the term “industrial policy” because, when it comes to defining these terms, “it is useful to have a limiting principle and it’s unclear what the limiting principle is for industrial policy.”

I sympathize. Debates about industrial policy are frustrating and unproductive when people cannot even agree on the parameters of sensible discussion. But I don’t think we need to dispense with the term altogether. We just need to define it somewhat more narrowly to make sure it remains useful. First, let’s consider how this exact same issue played out three decades ago. In the 1980s, many articles and books featured raging debates about the proper scope of industrial policy. I spent my early years as a policy analyst devouring all these books and essays because I originally wanted to be a trade policy analyst. And in the late 1980s and early 1990s, you could not be a trade policy analyst without confronting industrial policy arguments.

This was the era of what some called “Japan, Inc.” and Japan-bashing. South Korea and Taiwan were also part of that discussion, but the primary focus was “the Japan Model” and whether it represented the optimal industrial policy for the modern economy. That “Japan Model” sounds much like what is heard today when pundits reference China and its industrial policy model: generous (and highly targeted) R&D investments, government-led public-private consortia, industrial trade policies (a combination of export assistance plus restrictions on imports and foreign investment), and other forms of targeted government support for specific sectors or technological developments. In the 1980s Japan’s economy started expanding rapidly and many Japanese multinationals began making major investments in US businesses and properties. The Japanese government played an active role in facilitating much of this. Suddenly, lots of people in the US were debating the wisdom of America falling in line and adopting its own industrial policy to counter Japan. Panic was in the air in academic and legislative circles. Lawmakers were literally smashing Japanese electronics with sledgehammers on the stairs of the US Capitol. Meanwhile, pundits were publishing a steady stream of pessimistic books with titles asking, Can America Compete?, while others suggested that the US was Trading Places with Japan.
Japan-loathing probably reached its apex around 1991 or ’92 with the publication of the non-fiction book, The Coming War with Japan, and then Michael Crichton’s fictional book (and later movie adaptation), Rising Sun. Japan’s new economic model was supposedly going to steamroll US innovators and allow Japan to dominate the global economy for decades to come. Three decades later, we know how all this played out. The US never went to war again with Japan. We just kept trading peacefully with them, thankfully. Meanwhile, the “Japan, Inc.” industrial policy model didn’t quite pan out the way its champions hoped (or US pundits feared). In a 2007 report, Marcus Noland of the Peterson Institute for International Economics summarized Japan’s industrial policy results in bleak terms:
Japan faces significant challenges in encouraging innovation and entrepreneurship. Attempts to formally model past industrial policy interventions uniformly uncover little, if any, positive impact on productivity, growth, or welfare. The evidence indicates that most resource flows went to large, politically influential “backward” sectors, suggesting that political economy considerations may be central to the apparent ineffectiveness of Japanese industrial policy.
But I don’t want to get diverted into the specifics of why Japan’s industrial policy didn’t work. Rather, I just want to make the simple point that Japan definitely had an industrial policy that we can still evaluate today. We should not abandon all use of the term industrial policy because, once defined in a more focused fashion, it remains a useful concept worthy of serious academic study and deliberation.
Jump back to the mid-80s and flip through the individual contributions to this AEI book on The Politics of Industrial Policy. It features hot debates over the exact issue we’re still trying to figure out today. Essays by Aaron Wildavsky, Thomas McCraw, and James Fallows generally argued for a broad conception of what industrial policy should include. Others, such as economist Herbert Stein, insisted upon a much narrower reading of the term. Into that debate stepped economic historian Ellis W. Hawley with a wonderful essay on industrial policy efforts in the pre-New Deal era. Hawley began his essay with what I still regard as the best understanding of what “industrial policy” really means in practice. Here is Hawley’s definition:
By industrial policy I mean a national policy aimed at developing or retrenching selected industries to achieve national economic goals. In this usage, I follow those who distinguish such a policy, both from policies aimed at making the macroeconomic environment more conducive to industrial development in general and from the totality of microeconomic interventions aimed at particular industries. To have an industrial policy, a nation must not only be intervening at the microeconomic level but also have a planning and coordinating mechanism through which the intervention is rationally related to national goals, a general pattern of microeconomic targets is decided upon, and particular industrial programs are worked out and implemented.
I think Hawley’s conception of industrial policy gets it just right. Crucially, he clearly distinguishes industrial policy from “policy” more generally. And he specifies the requirements that a “planning and coordinating mechanism” exist and that a pattern of targets be established.
Symposium: Hirschman’s “Exit, Voice & Loyalty” at 50 https://techliberation.com/2020/08/27/symposium-hirschmans-exit-voice-loyalty-at-50/ https://techliberation.com/2020/08/27/symposium-hirschmans-exit-voice-loyalty-at-50/#respond Thu, 27 Aug 2020 15:28:01 +0000 https://techliberation.com/?p=76803

This month’s Cato Unbound symposium features a conversation about the continuing relevance of Albert Hirschman’s Exit, Voice and Loyalty: Responses to Decline in Firms, Organizations, and States, fifty years after its publication. It was a slender but important book that has influenced scholars in many different fields over the past five decades. The Cato symposium features a discussion between me and three other scholars who have attempted to use Hirschman’s framework when thinking about modern social, political, and technological developments.

My lead essay considers how entrepreneurialism and innovative activities might be reconceptualized as types of voice and exit using Hirschman’s insights. Response essays by Mikayla Novak, Ilya Somin, and Max Borders broaden the discussion and highlight how to think about Hirschman’s framework in various contexts. I then returned to the discussion this week with a response essay of my own, attempting to tie those essays together and extend the discussion of how technological innovation might provide us with greater voice and exit options going forward. Each contributor offers important insights and illustrates the continuing importance of Hirschman’s book.

I encourage you to jump over to Cato Unbound to read the essays and join the conversations in the comments.

Existential Risk & Emerging Technology Governance https://techliberation.com/2020/08/05/existential-risk-emerging-technology-governance/ https://techliberation.com/2020/08/05/existential-risk-emerging-technology-governance/#respond Wed, 05 Aug 2020 16:51:39 +0000 https://techliberation.com/?p=76795

“The world should think better about catastrophic and existential risks.” So says a new feature essay in The Economist. Indeed it should, and that includes existential risks associated with emerging technologies.

The primary focus of my research these days is broad-based governance trends for emerging technologies. In particular, I have spent the last few years attempting to better understand how and why “soft law” techniques have been tapped to fill governance gaps. As I noted in a recent post compiling my writing on the topic:

soft law refers to informal, collaborative, and constantly evolving governance mechanisms that differ from hard law in that they lack the same degree of enforceability. Soft law builds upon and operates in the shadow of hard law, but it lacks the degree of formality that hard law possesses. Despite many shortcomings and criticisms, compared with hard law, soft law can be more rapidly and flexibly adapted to suit new circumstances and address complex technological governance challenges. This is why many regulatory agencies are tapping soft law methods to address shortcomings in the traditional hard law governance systems.

As I argued in recent law review articles as well as my latest book, despite its imperfections, soft law has an important role to play in filling governance gaps that hard law struggles to address. But there are some instances where soft law simply will not cut it. As I noted in Chapter 7 of my new book, there may be very legitimate existential threats out there that we should be spending more time addressing because the scope, severity, and probability of serious harm are all present. Hard law solutions will still be needed in such instances, even if they are challenged by many of the same factors that are fueling the shift toward soft law in other sectors or on other issues.

Of course, we are immediately confronted with a definitional challenge: What exactly counts as an “existential risk”? I argue that it is important that we spend more time discussing this question because far too many people today throw around the term “existential risk” when referencing risks that are nothing of the sort. For example, increased social media use may indeed be a threat to data security and personal privacy, but those risks are not “existential” in the way chemical or nuclear weapons proliferation are threats to our existence. This gets to the heart of the matter: the root of “existential” is existence. By definition, an existential risk must have some direct bearing on humanity’s ability to survive. Efforts to conflate lesser risks into existential ones cheapen the very meaning of the term.

This shouldn’t be controversial, but somehow it is. Countless pundits today want to suggest that almost every new technological development might somehow pose an existential threat to humanity. But it just isn’t the case. That does not mean their concerns are unimportant, or undeserving of some government attention. It simply means that we need to take risk prioritization more seriously. If everything is an existential risk, then nothing is an existential risk. We must have some sort of ranking of risks if we hope to have a rational conversation about how to use scarce societal resources to address matters of public concern.

These issues are discussed at far greater length in the sections of my book (pgs. 228-240) that you will find embedded below. How should society deal with “killer robots” or the accelerated development of genetic editing capabilities? What kind of coordinated compliance regime might help address rogue actors who seek to use new technological capabilities for nefarious purposes? What can we learn from past global enforcement efforts for chemical and nuclear weapons? These are just some of the questions I take on in this section of the book and plan to spend more time addressing in coming years. Scan these pages to see my initial thoughts, but I am really just scratching the surface here. I’ll have much more to say on these matters in coming months and years. It’s a massively complicated topic.

Future Aviation, Drones, and Airspace Markets https://techliberation.com/2020/07/22/future-aviation-drones-and-airspace-markets/ https://techliberation.com/2020/07/22/future-aviation-drones-and-airspace-markets/#respond Wed, 22 Jul 2020 13:55:40 +0000 https://techliberation.com/?p=76767

My research focus lately has been studying and encouraging markets in airspace. Aviation airspace is valuable but has been assigned to date by regulatory mechanisms, custom, and rationing by industry agreement. This rationing was tolerable decades ago when airspace use was relatively light. Today, regulators need to consider markets in airspace–allowing the demarcation, purchase, and transfer of aerial corridors–in order to give later innovators airspace access, to avoid anticompetitive “route squatting,” and to serve as a revenue stream for governments, much like spectrum auctions and offshore oil leases.

Last month, the FAA came out in favor of “urban air mobility corridors”–point-to-point aerial highways that new eVTOLs, helicopters, and passenger drones will use. It’s a great proposal, but the FAA’s plan for allocating and sharing those corridors is largely to let industry participants negotiate the rules among themselves (the “Community Business Rules”):

Operations within UAM Corridors will also be supported by CBRs collaboratively developed by the stakeholder community based on industry standards or FAA guidelines and approved by the FAA.

This won’t end well, much like it didn’t end well when Congress and the Postmaster General let the nascent airlines divvy up air routes among themselves in the 1930s. We’re still living with the effects of those anticompetitive decisions: decades later, the FAA is still refereeing industry fights over routes and airport access.

Rather, regulators should create airspace markets because otherwise, as McKinsey analysts noted last year about urban air mobility:

first movers will have an advantage by securing the most attractive sites along high-traffic routes.

Airspace today is a common-pool resource rationed via regulation and custom. But with drones, eVTOL, and urban air mobility, congestion will increase and centralized air traffic control will need to give way to a more federated and privately-managed airspace system. As happened with spectrum: a demand shock to an Ostrom-ian common pool resource should lead to enclosure and “propertization.”

Markets in airspace probably should have been created decades ago once airline routes became fixed and airports became congested. Instead, the centralized, regulatory rationing led to large economic distortions:

For example, in 1968, nearly one-third of peak-time New York City air traffic–the busiest region in the US–was general aviation (that is, small, personal) aircraft. To combat severe congestion, local authorities raised minimum landing fees by a mere $20 (1968 dollars) on sub 25-seat aircraft. General aviation traffic at peak times immediately fell over 30%—suggesting that a massive amount of pre-July 1968 air traffic in the region was low-value. The share of aircraft delayed by 30 or more minutes fell from 17% to about 8%.

This pricing of airspace and airport access was half-hearted and resisted by incumbents. Regulators fell back on rationing via the creation of “slots” at busy airports, which were given mostly to dominant airlines. Slots have the attributes of property–they can be defined, valued, sold, transferred, borrowed against. But the federal government refuses to call them property, partly because of the embarrassing implications. The GAO said in 2008:

[the] argument that slots are property proves too much—it suggests that the agency [FAA] has been improperly giving away potentially millions of dollars of federal property, for no compensation, since it created the slot system in 1968.

It may be too late to have airspace and route markets for traditional airlines–but it’s not too late for drones and urban air mobility. Demarcating aerial corridors should proceed quickly to bring the drone industry and services to the US. As Adam has pointed out, this is a global race of “innovation arbitrage”–drone firms will go where regulators are responsive and flexible. Federal and state aviation officials should not give away valuable drone routes, which will end up going to first-movers and the politically powerful. Airspace markets, in contrast, avoid anticompetitive lock-in effects and give drone innovators a chance to gain access to valuable routes in the future.
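The auction mechanism argued for here can be illustrated with a toy model. The sketch below runs a sealed-bid, second-price (Vickrey) auction for a single aerial corridor: the highest bidder wins but pays the runner-up’s bid, which gives bidders an incentive to bid their true valuations. Everything in it (operator names, dollar figures, the one-corridor setup) is hypothetical and far simpler than any real FAA or spectrum-style auction would be:

```python
# Toy model: allocating one aerial corridor by sealed-bid, second-price auction.
# All names and numbers are illustrative; nothing here reflects an actual FAA mechanism.

def auction_corridor(bids):
    """Award the corridor to the highest bidder at the second-highest price.

    bids: dict mapping operator name -> bid in dollars.
    Returns (winner, price_paid).
    """
    # Rank bidders from highest to lowest bid.
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    # Second-price rule: pay the runner-up's bid (or your own if unopposed).
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Three hypothetical operators compete for a downtown corridor.
bids = {"OperatorA": 120_000, "OperatorB": 95_000, "OperatorC": 150_000}
winner, price = auction_corridor(bids)
print(winner, price)  # OperatorC wins and pays 120000, the second-highest bid
```

Real spectrum auctions use far richer formats (simultaneous multi-round, combinatorial bidding), but the incentive logic is the same: access is priced by competing demand rather than handed out by regulatory rationing.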

Research and Commentary on Airspace Markets

Law journal article. The North Carolina JOLT published my article, “Auctioning Airspace,” in October 2019. I argued for the FAA to demarcate and auction urban air mobility corridors (SSRN).

Mercatus white paper. In March 2020 Connor Haaland and I explained that federal and state transportation officials could demarcate and lease airspace to drone operators above public roads because many state laws allow local and state authorities to lease such airspace.

Law journal article. A student note in a 2020 Indiana Law Journal issue discusses airspace leasing for drone operations (pdf).

FAA report. The FAA’s Drone Advisory Committee in March 2018 took up the idea of auctioning or leasing airspace to drone operators as a way to finance the increased costs of drone regulations (pdf).

GAO report. The GAO reviewed the idea of auctioning or leasing airspace to drone operators in a December 2019 report (pdf).

Airbus UTM white paper. The Airbus UTM team reviewed the idea of auctioning or leasing airspace to UAM operators in a March 2020 report, “Fairness in Decentralized Strategic Deconfliction in UTM” (pdf).

Federalist Society video. I narrated a video for the Federalist Society in July 2020 about airspace design and drone federalism (YouTube).

Mercatus Center essay. Adam Thierer, Michael Koutrous, and Connor Haaland wrote about drone industry red tape, why the US can’t have “innovation by regulatory waiver,” and how to accelerate widespread drone services.

I’ve discussed the idea in several outlets and events, including:

Podcast Episodes about Drones and Airspace Markets

  • In a Federalist Society podcast episode, Adam Thierer and I discussed airspace markets and drone regulation with US Sen. Mike Lee. (Sen. Lee has introduced a bill to draw a line in the sky at 200 feet in order to clarify and formalize federal, state, and local powers over low-altitude airspace.)
  • Tech Policy Institute podcast episode with Sarah Oh, Eli Dourado, and Tom Lenard.
  • Macro Musings podcast episode with David Beckworth.
  • Drone Radio Show podcast episode with Randy Goers.
  • Drones in America podcast episode with Grant Guillot.
  • Uncommon Knowledge podcast episode with Juliette Sellgren.
  • Building Tomorrow podcast episode with Paul Matzko and Matthew Feeney.
  • sUAS News podcast episode and interview.
]]>
https://techliberation.com/2020/07/22/future-aviation-drones-and-airspace-markets/feed/ 0 76767
How Are We Ever Going to Stop the Blockbuster Video Monopoly? https://techliberation.com/2020/07/21/how-are-we-ever-going-to-stop-the-blockbuster-video-monopoly/ https://techliberation.com/2020/07/21/how-are-we-ever-going-to-stop-the-blockbuster-video-monopoly/#respond Tue, 21 Jul 2020 14:15:58 +0000 https://techliberation.com/?p=76771

Does anyone remember Blockbuster and Hollywood Video? I assume most of you do, but wow, doesn’t it seem like forever ago when we actually had to drive to stores to get movies to watch at home? What a drag that was!

Yet, just 15 years ago, that was the norm and those two firms were the titans of video distribution, so much so that federal regulators at the Federal Trade Commission looked to stop their hegemony through antitrust intervention. But then those firms and whatever “market power” they possessed quickly evaporated as a wave of Schumpeterian creative destruction swept through video distribution markets. Both those firms and antitrust regulators had completely failed to anticipate the tsunami of technological and marketplace changes about to hit in the form of alternative online video distribution platforms as well as the rise of smartphones and robust nationwide mobile networks.

Today, this serves as a cautionary tale of what happens when regulatory hubris triumphs over policy humility, as Trace Mitchell and I explain in this new essay for National Review Online entitled, “The Crystal Ball of Antitrust Regulators Is Cracked.” As we note:

There is no discernable end point to the process of entrepreneurial-driven change. In fact, it seems to be proliferating rapidly. To survive, even the most successful companies must be willing to quickly dispense with yesterday’s successful business plans, lest they be steamrolled by the relentless pace of technological change and ever-shifting consumer demands. It is easy to understand why some people find it hard to imagine a time when Amazon, Apple, Facebook, and Google won’t be quite as dominant as they are today. But it was equally challenging 20 years ago to imagine that those same companies could disrupt the giants of that era.

Hopefully today’s policymakers will have a little more patience and trust competition and continued technological innovation to bring us still more wonderful video choices.

[OC] Blockbuster Video US store locations between 1986 and 2019 from r/dataisbeautiful
]]>
https://techliberation.com/2020/07/21/how-are-we-ever-going-to-stop-the-blockbuster-video-monopoly/feed/ 0 76771
Encounters of the Drone Kind: Drone Shootings and No-Fly Zones https://techliberation.com/2020/06/26/encounters-of-the-drone-kind-drone-shootings-and-no-fly-zones/ https://techliberation.com/2020/06/26/encounters-of-the-drone-kind-drone-shootings-and-no-fly-zones/#respond Fri, 26 Jun 2020 12:49:44 +0000 https://techliberation.com/?p=76756

By Brent Skorup & Connor Haaland

We think drones are exciting technology with the potential to improve medical logistics, agriculture, transportation, and other industries. But drones fly at low altitudes and, to many Americans, drones represent a nuisance, trespasser, or privacy invasion when they fly over private property. This is why we think the FAA and states should work together to lease airspace above public roads—it would free up millions of miles of low-altitude airspace for operations while avoiding many lawsuits from public and private landowners.

In the meantime, states and landowners are pushing back on certain drone activities. Per Prof. Stephen Migala, about 10 states have created “no-fly zones” for drones, prohibiting flights over government property, state forests, or sensitive areas. Most state airspace rules prohibit drones at low altitudes over “critical infrastructure” like nuclear, gas and electric facilities, bridges, dams, and communication networks. Some states prohibit drones over jails, prisons, and schools.

In Texas, in fact, there is litigation over a state ban on photography drones above critical infrastructure, sports venues, and prisons. One of the legal issues is whether state police powers over trespass, nuisance, and privacy allow states to exclude drones from low-altitude airspace. As we’ve pointed out in a GovTech piece, this is a festering issue in drone regulation—no one knows at what altitude private property (and state police powers) begins.

For private property owners who don’t want drones flying over their property, they might be able to bring a trespass lawsuit under existing state law. Around 20 states expressly vest air rights with landowners. However, many states also recognize a privilege of non-disruptive flight, so it’s unclear if a landowner would win a lawsuit in those states. We’re unaware of the issue being litigated.

Unfortunately, many landowners and annoyed neighbors are taking matters into their own hands and shooting drones out of the sky. We’ve identified over a dozen such encounters in the past eight years, though there are likely some near-misses and unreported cases out there. (Don’t shoot a drone–it’s dangerous and, as the cases below show, you risk being arrested and convicted for criminal mischief or some other crime.)

  1. In November of 2012, unknown shooters in Bucks County, Pennsylvania shot down a drone that was flying over their hunt club. The drone was flown by an animal rights group to bring scrutiny to pigeon shooting and this was the fourth time the activists’ drone had been shot down. No criminal charges appear to have been filed.
  2. In October of 2014, a man shot down a drone in Lower Township, New Jersey. It’s unclear if the drone was hovering over his property or a neighbor’s. The man pleaded guilty to criminal mischief.
  3. In November 2014 in Modesto, California, a man allegedly instructed his minor son to shoot his neighbor’s drone out of the sky, and the drone was destroyed. The neighbor claims the drone was not over the man’s property and won $850 in small claims court from the man for damages and costs.
  4. In July of 2015 in Bullitt County, Kentucky, William Meredith, who was grilling with friends and annoyed at a drone flying over his backyard, shot it down when it flew over his property. The drone’s owner, a neighbor, called the police upon discovering his destroyed drone. Meredith was arrested and charged under local law for firing a gun in a populated area. At the highly publicized trial in state court, the judge dismissed the charges with a brief statement that Meredith was justified in shooting because of the invasion of privacy.
  5. In April of 2016, an unnamed woman shot down a drone in Edmond, Oklahoma. The drone was flown by a construction company employee who was inspecting gutters in the neighborhood. It’s unclear if the drone was flying over the woman’s property. The case was investigated by the police, who said that they did not expect to file charges.
  6. An unknown shooter in Aspen, Colorado shot down a drone during 4th of July fireworks in 2016. It’s unclear if the drone was over the shooter’s property. The pilot of the fallen drone filed a report with local police and the FAA but the shooter remains a mystery.
  7. In August of 2016, a woman allegedly shot down a drone in The Plains, Virginia with her 20-gauge shotgun. The woman alleged that the drone hovered 25 to 30 feet above her property and she believed it was being used to spy on her movie-star neighbor, Robert Duvall. The two men flying the drone left the scene when she told them she was calling the police. No charges were filed. 
  8. In April of 2017, an unknown person in Morgan County, Georgia shot down a drone with a .22 rifle. It’s unclear whose property the drone was flying over. The drone owner filed a report but a suspect was never identified.
  9. In October of 2017, a man allegedly shot down a drone in Jackson County, Oregon with his pellet rifle and later turned himself in for arrest. The photography drone was flying over a state recreation area. The local prosecutor charged the shooter with first-degree criminal mischief, a felony in Oregon. (The drone’s owner feels that a felony charge is excessive. A web search doesn’t reveal whether the man was convicted.)
  10. In May of 2018, a man allegedly attempted to shoot down a drone with his handgun in Bradenton, Florida. It was a neighbor’s drone and the man claims it was on his property, hovering a few feet above the ground. Police were called and warned the man about the danger and legal risk of shooting drones. No charges were filed.
  11. In February of 2019, a man allegedly shot down a drone in Long Island, New York with a shotgun. The drone was being used by an animal rescue group to find a lost dog. It’s unclear if the drone was flying over the man’s property. He was charged with third-degree criminal mischief and prohibited use of a weapon.
  12. In May of 2020, a man allegedly shot down a drone flying over a chicken processing plant in Watonwan County, Minnesota. The drone operator was apparently taking video of the plant as a citizen-journalist. The man was charged with two felonies: criminal damage to property and reckless discharge of a firearm in city limits. 
  13. In June 2020, someone shot a drone flying somewhere in western Pennsylvania at 390 feet above the ground. Despite being grazed and damaged, the drone managed to safely operate and land. It’s unclear if the drone was over the shooter’s property. The shooter is unknown and the drone operator contacted state police but has not filed a complaint.

As you can see, the legal penalties for shooting a drone vary based on the circumstances and the prosecutor. Some got off with warnings but a few were charged with a felony under state law. Arguably, someone shooting a drone violates federal law, which imposes penalties on anyone who

willfully . . . damages, destroys, disables, or wrecks . . . any civil aircraft used . . . in interstate . . . commerce.

Federal penalties for willfully damaging an aircraft are stiff—fines and up to 20 years’ imprisonment. We’re unaware of federal prosecutors bringing a case against someone for shooting a drone. Perhaps federal prosecutors feel it’s excessive to use this statute, which was written with passenger planes in mind. Further, it’s unclear when drones are used in interstate commerce. As one federal judge said in a 2016 drone regulation case, Huerta v. Haughwout:

the FAA believes it has regulatory sovereignty over every cubic inch of outdoor air in the United States. . . . [I]t is far from clear that Congress intends—or could constitutionally intend—to regulate all that is airborne on one’s own property and that poses no plausible threat to or substantial effect on air transport or interstate commerce in general.

Hopefully lawmakers will clear up the ambiguity and demarcate where property rights end. As we pointed out in our recent 50-state drone report card, creating drone highways would prevent many issues. Congress should also consider drawing a federal-state dividing line in the sky, much like it drew a dividing line in the ocean in the Submerged Lands Act for energy development. For now, landowners, drone operators, the FAA, and state governments are all trying to determine the limits of their authority.

]]>
https://techliberation.com/2020/06/26/encounters-of-the-drone-kind-drone-shootings-and-no-fly-zones/feed/ 0 76756
The Section 230 Executive Order, Free Speech, and the FCC https://techliberation.com/2020/06/03/the-section-230-executive-order-free-speech-and-the-fcc/ https://techliberation.com/2020/06/03/the-section-230-executive-order-free-speech-and-the-fcc/#comments Wed, 03 Jun 2020 18:50:22 +0000 https://techliberation.com/?p=76746

Section 230 is in trouble. Both presidential candidates have made its elimination a priority. In January, Joe Biden told the New York Times that the liability protections for social media companies should be revoked “immediately.” This week, President Trump called for revoking Section 230 as well. Most notably, after a few years of threatening action, the President issued an Executive Order about Section 230, its liability protections, and free speech online. (My article with Jennifer Huddleston about Section 230, its free speech benefits, and the common law precedents for Section 230 was published in the Oklahoma Law Review earlier this year.) 

There have been thousands of reactions to and news stories about the Executive Order and a lot of hyperbole. No, the Order doesn’t eliminate tech companies’ Section 230 protection and make it easier for conservatives to sue. No, the Order isn’t “plainly illegal.”

It’s fairly modest in reach actually. The Executive Order can’t change the deregulatory posture and specific protections of Section 230 but the President has broad authority to interpret the unclear meanings of statutes. Some of the thoughtful responses that stuck out are from Adam Thierer, Jennifer Huddleston, Patrick Hedger, and Adam White. I won’t reiterate what they’ve said but will focus on what the Order does and what the FCC can do.

Election Year Jawboning

The Order is a political document. For the baseball fans, it’s the political equivalent of a brushback pitch to tech companies–the pitcher throws an inside fastball intended to scare the batter without hitting him. (Enjoy 4 minutes of brushback pitches on YouTube.) Most of the time, a pitcher won’t get ejected by the umpire for throwing a brushback pitch. Likewise, here, I don’t see much chance of the Order being struck down by judges. The Order was wordsmithed, even in the last 24 hours before release, in a way to avoid legal troubles.

As Jesse Blumenthal points out in Slate, the Order is just the latest example of the long tradition of politicians using informal means and publicity to pressure media outlets. The political threats to TV and radio broadcasters during the Nixon, LBJ, and Kennedy years were extreme examples and are pretty well-documented.

More recently, there was a huge amount of jawboning of media companies in the runup to the 2004 election. Newspaper condemnation and legal threats forced a documentary critical of John Kerry off the air nationwide. Stations either pulled the documentary or only ran a few minutes of it because activists threatened to challenge TV station licenses for years at the FCC if stations ran the documentary. Many people remember the Citizens United case, which derived from the FEC’s censorship of an anti-John Kerry documentary in 2004 and an anti-Hillary Clinton documentary in 2008. Less remembered is that the conservative group started creating political documentaries only after the FEC rejected its complaint to get Michael Moore’s anti-Bush documentary, Fahrenheit 9/11, off the air before the 2004 election.

The Title II net neutrality regulations were, per advocates close to the Obama White House, imposed largely to rally the base after Democrats’ 2014 midterm losses.

Implementation of the Executive Order

The timing of the Order–a few months before the election–seems intended to accomplish two things:

  1. Rally the Trump base by publicly threatening tech companies’ liability protections and provoking tech companies’ ire.
  2. Focus public and media scrutiny on tech companies so they think twice before suspending, demonetizing, or banning conservatives online.

The legal effect in the short term is negligible. Unless the relevant agencies (DOJ, FTC, NTIA, FCC) patched something together hastily, the Order won’t have an effect on tech companies and their susceptibility to lawsuits in the near term. The most immediate practical effect of the Order is the instructions to the NTIA. The agency is directed to petition the FCC to clarify what some unclear provisions of Sec. 230 mean, particularly the “good faith” requirement and how (c)(2) in the statute interacts with (c)(1).

It’s not clear why the Order makes this roundabout instruction to the NTIA and FCC. (The FCC is an independent agency and can refuse instructions from the White House.) “Good faith” is a term of art in contract law. It seems to me that the DOJ’s Office of Legal Counsel, not the FCC, would be the natural place for an administration to turn to interpret legal terms of art and how provisions in federal statutes interact with each other.

One reason the White House might use the roundabout method is because the administration knows the downsides of weakening Section 230 and isn’t actually intending to make material changes to existing interpretations of Sec. 230. The roundabout request to the FCC allows the White House to do something on the issue without upsetting established interpretations. And if the FCC refuses to take it up, the White House can tell supporters they tried but it was out of their hands.

Alternatively it could be that this was referred to the FCC because Section 230 is within the Communications Act and the FCC has more expertise and jurisdiction in communications law. The FCC has interpreted Section 230 before and has also interpreted what “good faith” means because Congress requires good faith negotiations between cable TV and broadcast TV operators.

If they took it up, I suspect FCC review would be perfunctory. The NTIA petition need not even get decided at the commission level. The FCC can delegate issues to bureau chiefs or other FCC staff. Bureaus can respond to a petition with an enforcement advisory or, after notice-and-comment, a declaratory ruling regarding the interpretative issues. It would take months to complete, but the full commission could also consider and rule on the NTIA petition.

But I suspect the commissioners don’t want to get dragged into election-year controversies. (As I mentioned above, White House staff may have even sent this to the FCC in order to let the issue die quietly.) The FCC is busy with pressing issues like spectrum auctions and rural broadband. Further, the NTIA-FCC relationship, while cordial, is not particularly good at the moment. Finally, the commissioners know the agency’s history of mission creep and media regulation. The Republican majority has consistently tried to untangle itself from legacy media regulations. An FCC inquiry into what “good faith” means in the statute and how (c)(2) in the statute interacts with (c)(1)–while an intriguing academic and legal interpretation exercise–would be a small but significant step towards FCC oversight of Internet services.

Section 230 is in Trouble

The fact is, Section 230 is in trouble. Courts have applied it reluctantly since its inception because of its broad protections. As Prof. Eric Goldman has meticulously documented, in recent years, courts have undermined Section 230 precedent and protection.

At some level the President and his advisors know that opening the door to regulation of the Internet will end badly for right-of-center voices and free speech. This was the foundation of the President’s opposition to Title II net neutrality rules. As he’s stated on Twitter:

Obama’s attack on the internet is another top down power grab. Net neutrality is the Fairness Doctrine. Will target conservative media.


The Executive Order, while it doesn’t allow the FCC to regulate online media like Title II net neutrality did, is the Administration playing with fire. It’s essentially a bet that the Trump administration can get a short-term political win without unleashing long-term problems for conservatives and free speech online.

The Trump team may be right. But the Order, by inviting FCC involvement, represents a small step toward regulation of Internet services. More significantly, there’s a reason prominent Democrats are calling for the elimination of Section 230. The trial bar, law school clinics, and advocacy nonprofits would like nothing more than to make it expensive for tech companies to defend hosting and disseminating conservative publications and provocateurs.

Prominent Democrats would replace Sec. 230 with a Fairness Doctrine for the Internet. If things go their way, the Executive Order could give regulators, much of the legal establishment, and the left the foothold they’ve sought for years to regulate Internet services and online speech. Be careful what you wish for.

]]>
https://techliberation.com/2020/06/03/the-section-230-executive-order-free-speech-and-the-fcc/feed/ 1 76746
The Surprising Ideological Origins of Trump’s Communications Collectivism https://techliberation.com/2020/05/28/the-surprising-ideological-origins-of-trumps-communications-collectivism/ https://techliberation.com/2020/05/28/the-surprising-ideological-origins-of-trumps-communications-collectivism/#respond Thu, 28 May 2020 19:40:03 +0000 https://techliberation.com/?p=76742

President Trump and his allies have gone to war with social media sites and digital communications platforms like Twitter, Facebook, and Google. Decrying supposed anti-conservative “bias,” Trump has even floated an Executive Order aimed at “Preventing Online Censorship” that entails many new forms of government meddling with these private speech platforms. Section 230 is in their crosshairs and First Amendment restraints are being thrown to the wind.

Various others have already documented the many legal problems with Trump’s call for greater government oversight of private speech platforms. I want to focus on something slightly different here: the surprising ideological origins of what Trump and his allies are proposing. For those of us who are old-timers and have followed communications and media policy for many decades, this moment feels like deja vu all over again, but with the strange twist that supposed “conservatives” are calling for a form of communications collectivism that used to be the exclusive province of hard-core Leftists.

To begin, the truly crazy thing about President Trump and some conservatives saying that social media should be regulated as public forums is not just that they’re abandoning free speech rights, it’s that they’re betraying property rights, too. Treating private media like a “public square” entails a taking of private property. Amazingly, Trump and his followers have taken over the old “media access movement” and given it their own spin.

Media access advocates look to transform the First Amendment into a tool for social change to advance specific political ends or ideological objectives. Media access theory dispenses with both the editorial discretion rights and private property rights of private speech platforms. Private platforms become subject to the political whims of policymakers who dictate “fair” terms of access. We can think of this as communications collectivism.

The media access movement’s regulatory toolkit includes things like the Fairness Doctrine and “neutrality” requirements, right-of-reply mandates, expansive conceptions of common carriage (using “public forum” or “town square” rhetoric), agency threats, and so on. Even without formal regulation, media access theorists hope that jawboning and political pressure can persuade private platforms to run more (or perhaps sometimes less) of the content that they want (or don’t) on media platforms.

The intellectual roots of the media access movement were planted by leftist media theorists like Jerome Barron and Owen Fiss in the 1960s and 1970s, and later by Marxist communications scholar Robert McChesney. In 2005, I penned this short history of the media access movement and explored its aims. I also wrote two old books with chapters on the dangers of media access theory and calls for collectivizing communications and media systems: Media Myths (2005) and A Manifesto for Media Freedom (2008, with Brian C. Anderson). The key takeaway from those essays is that the media access movement comes down to control.

The best book ever written about the dangers of the media access movement is Jonathan Emord’s 1991 Freedom, Technology and the First Amendment. He perfectly summarizes their goals (and now Trump’s) as follows:

  • “In short, the access advocates have transformed the marketplace of ideas from a laissez-faire model to a state-control model.”
  • “Rather than understanding the First Amendment to be a guardian of the private sphere of communication, the access advocates interpret it to be a guarantee of a preferred mix of ideological viewpoints.”
  • “It fundamentally shifts the marketplace of ideas from its private, unregulated, and interactive context to one within the compass of state control, making the marketplace ultimately responsible to government for determinations as to the choice of content expressed.”

“This arrogant, elitist, anti-property, anti-freedom ethic is what drives the media access movement and makes it so morally repugnant,” I argued in that old TLF essay. That is still just as true today, even when it’s conservatives calling for collectivization of media.

It’s astonishing, but true, that the ideological roots of Trump’s anti-social media campaign lie in the works of those extreme Leftists and even media Marxists. He has just given media access theory his own unique nationalistic spin and sold this snake oil to conservatives.

There certainly could come a day when his opponents on the Left take this media access playbook up again and suggest it is exactly what’s needed for Fox News and other right-leaning media outlets. If and when that happens, Trump and other conservatives will have no one to blame but themselves for embracing this contemptible philosophical vision simply because it suited their short-term desires while they were in power.

I hope that conservatives rethink their embrace of communications collectivism, but I fear that Trump and his allies have already convinced themselves that the ends justify the means when it comes to advancing their causes or even just “owning the libs.” There really is a strong moralistic slant to what Trump and many of his allies want. They think they are on the right side of history and that their opponents–including most media outlets and platforms–are evil. Trump and his allies have repeatedly referred to the press as the “enemy of the American people” and endlessly lambasted social media platforms for not going along with his desires. This reflects a core tendency of all communications collectivists: a sort of ‘you’re-either-with-us-or-against-us’ attitude.

Steve Bannon scripted all this out back in 2018. Go back and read this astonishing CNN interview for a preview of what could happen next. Here’s the rundown:

  • Bannon said Big Tech’s data should be seized and put in a “public trust.” Specifically, Bannon said, “I think you take [the data] away from the companies. All that data they have is put in a public trust. They can use it. And people can opt in and opt out. That trust is run by an independent board of directors. It just can’t be that [Big Tech is] the sole proprietors of this data…I think this is a public good.” Bannon added that Big Tech companies “have to be broken up” just like Teddy Roosevelt broke up the trusts.
  • Bannon attacked the executives of Facebook, Twitter and Google. “These are run by sociopaths,” he said. “These people are complete narcissists. These people ought to be controlled, they ought to be regulated.” At one point during the phone call, Bannon said, “These people are evil. There is no doubt about that.”
  • Bannon said he thinks “this is going to be a massive issue” in future elections. He said he thinks it will probably take until 2020 to fully blossom as a campaign issue, explaining, “I think by the time 2020 comes along, this will be a burning issue. I think this will be one of the biggest domestic issues.”

This is now Trump’s playbook. It’s incredibly frightening because, once married up with Trump’s accusations of election fraud and other imagined conspiracies, you can sense how he’s laying the groundwork to call into question future election results by suggesting that both traditional media and modern digital media platforms are just in bed with the Democratic party and trying to rig the presidential election. I don’t really want to think about what happens if this situation escalates to that point. These are very dark days for the American Republic.

Panicking About 5G is a Celebrity Trend You Shouldn’t Follow https://techliberation.com/2020/05/13/panicking-about-5g-is-a-celebrity-trend-you-shouldnt-follow/ https://techliberation.com/2020/05/13/panicking-about-5g-is-a-celebrity-trend-you-shouldnt-follow/#respond Wed, 13 May 2020 14:00:03 +0000 https://techliberation.com/?p=76728

The COVID-19 pandemic has shown how important technology is for enabling social distancing measures while staying connected to friends, family, school, and work. But for some, including a number of celebrities, it has also heightened fears of emerging technologies that could further improve our connectivity. The latest technopanic should not make us fear technology that has added so much to our lives and that promises to help us even more.

Celebrities such as Keri Hilson, John Cusack, and Woody Harrelson have repeated concerns about 5G—from how it could be weakening our immune systems to even causing this pandemic. These claims about 5G have gotten serious enough that Google banned ads with misleading health information regarding 5G, and Twitter has stated it will remove tweets with 5G and health misinformation that could potentially cause harm in light of the COVID-19 pandemic. 5G is not causing the current pandemic, nor has it been linked to other health concerns. As Dr. Georges C. Benjamin, executive director of the American Public Health Association, has stated, “COVID-19 is caused by a virus that came through a natural animal source and has no relation to 5G, or any radiation linked to technology.” As the New York Times has pointed out, many of the non-COVID-19 health concerns about 5G originated from the Russian propaganda outlet RT or trace back to a single decades-old flawed study. In short, there is no evidence to support the outrageous health claims regarding 5G.

New technologies have often faced unfounded concerns about their potential risks. In the late 19th and early 20th centuries, many people feared electricity in the home was making people tired and weak (similar to the health claims about 5G today). More recently, many were concerned that technologies such as microwave ovens and cell phones might cause cancer or other health issues, but studies have shown these fears have little grounding in science.

Some of these fears are based on misunderstandings of how technology works or confusion over similar but distinct technologies. For example, in the case of concerns about cell phones and cancer, the fears may stem from misunderstandings about the differences between ionizing and non-ionizing radiation. In a time of uncertainty, we may want to rush to maintain the status quo. But any number of innovations such as the radio, trains, or cars that were once feared have themselves become part of the status quo.

Why does it matter if some people are afraid of new technologies? While it is completely rational to want to avoid catastrophic and irreversible harms, unfounded fears risk delaying important and beneficial technologies. For example, work by Linda Simon suggests that exaggerated claims and fears about electricity’s impact on health may have slowed its adoption. While all technologies carry some risks, can we imagine all that might have been lost if we had listened to those urging us to avoid electricity out of an abundance of caution? We may laugh at those old fears of electricity today, but we still see extreme reactions to new technology, such as recent attempts to burn 5G towers in the United Kingdom because of misinformation about health risks.

The recent pandemic should remind us why constantly improving connectivity and internet infrastructure is beneficial. As more of us work from home with an increased number of connected devices, 5G will increase network capacity and enable faster download speeds. These improvements also play a key role in the development of a number of emerging technologies, from smart home devices and virtual reality to driverless cars and remote surgery.

The problem is not in individual choices to avoid a specific technology, but rather how such technopanics can impact broader adoption of beneficial technologies and innovation-friendly public policies. The good news is policymakers recognize the importance of policies that enable 5G and are also informing the public on the facts about wireless technology and health. During the COVID-19 pandemic, the Federal Communications Commission has continued to pursue policies that can improve connectivity, including for advancements toward 5G.

While we may want to follow celebrity trends when it comes to the latest fashion or TikTok dances, we should only let celebrities scare us in the movies, not when it comes to 5G. If we focus only on the most outrageous and unfounded claims, our fear of 5G might distract us from its benefits.

The use of technology in COVID-19 public health surveillance https://techliberation.com/2020/04/21/the-use-of-technology-in-covid-19-public-health-surveillance/ https://techliberation.com/2020/04/21/the-use-of-technology-in-covid-19-public-health-surveillance/#respond Tue, 21 Apr 2020 16:29:33 +0000 https://techliberation.com/?p=76689

The recently-passed CARES Act included $500 million for the CDC to develop a new “surveillance and data-collection system” to monitor the spread of COVID-19.

There’s a fierce debate about how to use technology for public health surveillance during the COVID-19 crisis. Unfortunately, this debate is happening in real time as governments and tech companies try to reduce infection and death while complying with national laws and norms related to privacy.

Technology has helped during the crisis and saved lives. Social media, chat apps, and online forums allow doctors, public health officials, manufacturers, entrepreneurs, and regulators around the world to compare notes and share best practices. Broadband networks, Zoom, streaming media, and gaming make stay-at-home orders much more pleasant and keep millions of Americans working remotely. Telehealth apps allow doctors to safely see patients with symptoms. Finally, grocery and parcel delivery from Amazon, Grubhub, and other app companies keep pantries full and serve as a lifeline for many restaurants.

The great tech successes here, however, will be harder to replicate for contact tracing and public health surveillance. Even the countries that had the tech infrastructure somewhat in place for contact tracing and public health surveillance are finding it hard to scale. Privacy issues are also significant obstacles. (On the Truth on the Market blog, FTC Commissioner Christine Wilson provides a great survey of how other countries are using technology for public health and analysis of privacy considerations. Bronwyn Howell also has a good post on the topic.) Let’s examine some of the strengths and weaknesses of the technologies.

Cell tower location information

Personal smartphones typically connect to the nearest cell tower, so cell networks record (roughly) where a smartphone is at a particular time. Mobile carriers are sharing aggregated cell tower data with public health officials in Austria, Germany, and Italy for mobility information.

This data is better than nothing for estimating district- or region-wide stay-at-home compliance, but the geolocation is imprecise (to the half-mile or so).

Cell tower data could also be used to enforce a virtual geofence around quarantined people. Taiwan, for instance, uses this data to enforce quarantines. If you leave the geofenced area, public health officials receive an automated notification that you have left home.
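A geofence check of this kind is simple at its core. Here is a minimal Python sketch (hypothetical function names, not any carrier’s actual system) using the haversine distance, with the radius set near the rough half-mile precision of tower fixes:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def outside_geofence(home, observed, radius_m=800):
    """Flag a tower-based location fix that falls outside the quarantine
    radius; ~800 m reflects the rough half-mile precision of tower data."""
    return haversine_m(*home, *observed) > radius_m
```

A real system would presumably debounce noisy fixes (a single stray tower handoff should not trigger a police visit) before notifying officials.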

Assessment: Ubiquitous, scalable. But: rarely useful and virtually useless for contact tracing.

GPS-based apps and bracelets

Many smartphone apps passively transmit precise GPS location to app companies at all hours of the day. Google and Apple have anonymized and aggregated this kind of information in order to assess stay-at-home order effects on mobility. Facebook reportedly is also sharing similar location data with public health officials.

As Trace Mitchell and I pointed out in Mercatus and National Review publications, this information is imperfect but could be combined with infection data to categorize neighborhoods or counties as high-risk or low-risk. 

GPS data, before it’s aggregated by the app companies for public view, reveals precisely where people are (within meters). Individual data is a goldmine for governments, but public health officials will have a hard time convincing Americans, tech companies, and judges they can be trusted with the data.

It’s an easier lift in other countries where trust in government is higher and central governments are more powerful. Precise geolocation could be used to enforce quarantines.

Hong Kong, for instance, has used GPS wristbands to enforce some quarantines. Tens of thousands of Polish residents in quarantine must download a geolocation-based app and check in, which allows authorities to enforce quarantine restrictions. It appears that most people support the initiative.

Finally, in Iceland, one third of citizens have voluntarily downloaded a geolocation app to assist public officials in contact tracing. Public health officials call or message people when geolocation records indicate previous proximity with an infected person. WSJ journalists reported on April 9 that:

If there is no response, they send a police squad car to the person’s house. The potentially infected person must remain in quarantine for 14 days and risk a fine of up to 250,000 Icelandic kronur ($1,750) if they break it.

There are scattered examples of US officials using GPS for quarantines, too. Local officials in Louisville, Kentucky, for example, are requiring some COVID-19-positive or exposed people to wear GPS ankle monitors to enforce quarantine.

Assessment: Aggregated geolocation information is possibly useful for assessing regional stay-at-home norms. Individual geolocation information is not precise enough for effective contact tracing. It’s probably precise and effective for quarantine enforcement. But: individual geolocation is invasive and, if not volunteered by app companies or users, raises significant constitutional issues in the US.

Bluetooth apps

Many researchers and nations are working on or have released some type of Bluetooth app for contact tracing. This includes Singapore, the Czech Republic, Britain, Germany, Italy and New Zealand.  

For people who use these apps, Bluetooth runs in the background, recording other Bluetooth users nearby. Since Bluetooth is a low-power wireless technology, it can really only “see” other users within a few meters. If you use the app for a while and later test positive for infection, you can register your diagnosis. The app will then notify (anonymously) everyone else using the app whom you came in contact with in the past several days, along with public health officials in some countries. My colleague Andrea O’Sullivan wrote a great piece in Reason about contact tracing using Bluetooth.
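The matching step can be sketched in a few lines of Python. This is a simplified illustration, not the actual Apple/Google or TraceTogether protocol, which use rotating cryptographic identifiers and on-device matching:

```python
from datetime import datetime, timedelta

# Each phone broadcasts short-lived random IDs over Bluetooth and logs the
# IDs it hears nearby; nothing here identifies a person or a location.
contact_log = []  # (heard_id, timestamp) pairs recorded by this phone

def record_contact(heard_id, ts):
    contact_log.append((heard_id, ts))

def exposed(positive_ids, window_days=14, now=None):
    """True if this phone heard an ID later registered as positive
    within the look-back window."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=window_days)
    return any(hid in positive_ids and ts >= cutoff
               for hid, ts in contact_log)
```

When someone registers a positive diagnosis, their recent IDs are published; every other phone runs the check above locally, so the notification can stay anonymous.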

These apps have benefits over other forms of public health tech surveillance: they are more precise than geolocation information and they are voluntary.

The problem is that, unlike geolocation apps, which have nearly 100% penetration with smartphone users, Bluetooth contact tracing apps have about 0% penetration in the US today. Further, these app creators, even governments, don’t seem to have the PR machine to gain meaningful public adoption. In Singapore, for instance, adoption is reportedly only 12% of the population, which is way too low to be very helpful.

Only a handful of institutions in the world could drive appreciable adoption of Bluetooth contact tracing: telecom and tech companies have big ad budgets and they own the digital real estate on our smartphones.

Which is why the news that Google and Apple are working on a contact tracing app is noteworthy. They have the budget and ability to make their hundreds of millions of Android and iOS users aware of the contact tracing app. They could even go so far as push a notification to the home screen to all users encouraging them to use it.

However, I suspect they won’t push it hard. It would raise alarm bells with many users. Further, as Dan Grover stated a few weeks ago about why US tech companies haven’t been as active as Chinese tech companies in using apps to improve public education and norms related to COVID-19:

Since the post-2016 “techlash”, tech companies in Silicon Valley have acted with a sometimes suffocating sense of caution and unease about their power in the world. They are extremely careful to not do anything that would set off either party or anyone with ideas about regulation. And they seldom use their pixel real estate towards affecting political change.

[Ed.: their puzzling advocacy of Title II “net neutrality” regulation being a big exception.]

Techlash aside, presumably US companies also aren’t receiving the government pressure Chinese companies are receiving to push public health surveillance apps and information. [Ed.: Bloomberg reports that France and EU officials want the Google-Apple app to relay contact tracing notices to public health officials, not merely to affected users. HT Eli Dourado]

Like most people, I have mixed feelings about how coercive the state and how pushy tech companies should be during this pandemic. A big problem is that we still have only an inkling about how deadly COVID-19 is, how quickly it spreads, and how damaging stay-at-home rules and norms are for the economy. Further, contact-tracing apps still need extensive, laborious home visits and follow-up from public health officials to be effective–something the US has shown little ability to do.

There are other social costs to widespread tech-enabled tracing. Tyler Cowen points out in Bloomberg that contact tracing tech is likely inevitable, but that would leave behind those without smartphones. That’s true, and a major problem for the over-70 crowd, who lack smartphones as a group and are most vulnerable to COVID-19.

Because I predict that Apple and Google won’t push the app hard and I doubt there will be mandates from federal or state officials, I think there’s only a small chance (less than 15%) a contact tracing wireless technology will gain ubiquitous adoption this year (60% penetration, more than 200 million US smartphone users). 

Assessment: A Bluetooth app could protect privacy while, if volunteered, giving public health officials useful information for contact tracing. However, absent aggressive pushes from governments or tech companies, it’s unlikely there will be enough users to significantly help.

Health Passport

The chances of mass Bluetooth app use would increase if the underlying tech or API is used to create a “health passport” or “immunity passport”–a near-real-time medical certification that someone will not infect others. Politico reported on April 10 that Dr. Anthony Fauci, the White House point man on the pandemic, said the immunity passport idea “has merit.”

It’s not clear what limits Apple and Google will put on their API but most APIs can be customized by other businesses and users. The Bluetooth app and API could feed into a health passport app, showing at a glance whether you are infected or you’d been near someone infected recently.

For venues like churches and gyms and operators like airlines and cruise ships that need high trust from participants and customers, on-the-spot screening via blood test, temperature check, or Bluetooth app will likely gain traction.

There are the beginnings of a health passport in China, with QR codes and individual risk classifications from public health officials. Particularly for airlines, a favored industry in most nations, there could be public pressure for and widespread adoption of a digital health passport. Emirates Airlines and the Dubai Health Authority, for instance, last week required all passengers on a flight to Tunisia to take a COVID-19 blood test before boarding. Results came in 10 minutes.

Assessment: A health passport integrates several types of data into a single interface. The complexity makes widespread use unlikely but it could gain voluntary adoption by certain industries and populations (business travelers, tourists, nursing home residents).

Conclusion

In short, tech could help with quarantine enforcement and contact tracing, but there are thorny questions of privacy norms and it’s not clear US health officials have the ability to do the home visits and phone calls to detect spread and enforce quarantines. All of these technologies have issues (privacy or penetration or testing) and there are many unknowns about transmission and risk. The question is how far tech companies, federal and state law officials, the American public, and judges are prepared to go.

GPS location data and COVID-19 response https://techliberation.com/2020/03/20/gps-location-data-and-covid-19-response/ https://techliberation.com/2020/03/20/gps-location-data-and-covid-19-response/#respond Fri, 20 Mar 2020 20:07:38 +0000 https://techliberation.com/?p=76679

I saw a Bloomberg News report that officials in Austria and Italy are seeking (aggregated, anonymized) users’ location data from cellphone companies to see if local and national lockdowns are effective.

It’s an interesting idea that raises some possibilities for US officials and tech companies to consider to combat the crisis in the US. Caveat: these are very preliminary thoughts.

Cellphone location data from a phone company is OK but imprecise about your movements. It can typically show where you are within a mile or half-mile area.

But smartphone app location data is much more precise since it uses GPS, not cell towers, to track movements. Apps with location services can place people within meters, not within a half-mile like cell towers. I suspect 90%+ of smartphone users have GPS location services on (Google Maps, Facebook, Yelp, etc.). App companies have rich datasets of people’s daily movements.

Step 1 – App companies isolate and share location trends with health officials

This would need to be aggregated and anonymized, of course. Tech companies, working with health officials, should identify red and green zones, as Balaji Srinivasan suggests. The point is not to identify individuals but to make generalizations about whether a neighborhood or town is practicing good distancing.


Step 2 – In green zones, where infection and hospitalization rates are low and app data says people are strictly distancing, distribute COVID-19 tests.

If people are spending 22 hours a day at home except for brief visits to the grocery store and parks, that’s a good sign for the neighborhood. We need tests distributed daily in non-infected areas, perhaps at grocery stores and via USPS and Amazon deliveries. As soon as test production ramps up, tests need to flood into the areas that are healthy. This achieves two things:

  • Asymptomatic people who might spread can stay home.
  • Non-infected people can start returning to work and a life of semi-normalcy of movement with confidence that others who are out are non-contagious.

Step 3 – In red zones, where infection/hospitalization is high and people aren’t strictly distancing, public education and restrictions.

At least in Virginia, there is county-level data about where the hotspots are. I expect other states know the counties and neighborhoods that are hit hard. Where those hotspots overlap with areas that aren’t distancing, step up public education and restrictions.
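Put together, the zone logic amounts to joining county infection counts with aggregated mobility data. A toy Python sketch, with thresholds invented purely for illustration and not as public health guidance:

```python
def classify_zone(cases_per_100k, median_hours_home):
    """Toy classifier combining county infection data with aggregated
    mobility data; the thresholds below are illustrative assumptions."""
    low_infection = cases_per_100k < 10
    distancing = median_hours_home >= 20
    if low_infection and distancing:
        return "green"   # flood with tests, begin cautious reopening
    if not low_infection and not distancing:
        return "red"     # step up education and restrictions
    return "yellow"      # mixed signals: watch closely
```

Real classifications would weigh testing rates, hospital capacity, and trends over time, but the basic join of infection and mobility data looks like this.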

That still leaves open what to do about yellow zones that are adjacent to red zones, but the main priority should be to identify the green and red. The longer health officials and the public fly blind with no end in sight, the more people get frustrated, lose jobs, shutter businesses, and violate distancing rules.

Remote Work and the State of US Broadband https://techliberation.com/2020/03/12/remote-work-and-the-state-of-us-broadband/ https://techliberation.com/2020/03/12/remote-work-and-the-state-of-us-broadband/#respond Thu, 12 Mar 2020 18:36:53 +0000 https://techliberation.com/?p=76677

To help slow the spread of the coronavirus, the GMU campus is moving to remote instruction and Mercatus is moving to remote work for employees until the risk subsides. GMU and Mercatus employees join thousands of other universities and businesses this week. Millions of people will be working from home and it will be a major test of American broadband and cellular networks. 

There will likely be a loss of productivity nationwide–some things just can’t be done well remotely. But hopefully broadband access is not a major issue. What is the state of US networks? How many people lack the ability to do remote work and remote homework?

The FCC and Pew Research Center keep pretty good track of broadband buildout and adoption. There are many bright spots but some areas of concern as well.

Who lacks service?

The top question: How many people want broadband but lack adequate service or have no service?

The good news is that around 94% of Americans have access to 25 Mbps landline broadband. (Millions more have access if you include broadband from cellular and WISP providers.) It’s not much consolation to rural customers and remote workers who have limited or no options, but these are good numbers.

According to Pew’s 2019 report, about 2% of Americans cite inadequate or no options as the main reason they don’t have broadband. What is concerning is that this 2% number hasn’t budged in years. In 2015, about the same number of Americans cited inadequate or no options as the main reason they didn’t have home broadband. This resembles what I’ve called “the 2% problem“–about 2% of the most rural American households are extremely costly to serve with landline broadband. Satellite, cellular, or WISP service will likely be the best option.

Mobile broadband trends

Mobile broadband is increasingly an option for home broadband. About 24% of Americans with home Internet are mobile only, according to Pew, up from ~16% in 2015.

The ubiquity of high-speed mobile broadband has been the big story in recent years. Per FCC data, from 2009 to 2017 (the most recent year for which we have data), mobile connections increased by about 30 million annually. In Dec. 2017, there were about 313 million mobile subscriptions.

Coverage is very good in the US. OpenSignal uses crowdsourced data and software to determine how frequently users’ phones have a 4G LTE network available (a proxy for coverage and network quality) around the world. The US ranked fourth in the world (86%) in 2017, beating out every European country save Norway.

There was also a big improvement in mobile speeds. In 2009, a 3G world, almost all connections were below 3 Mbps. In 2017, a world of 4G LTE, almost all connections were above 3 Mbps.

Landline broadband trends

Landline broadband also increased significantly. From 2009 to 2017, there were about 3.5 million new connections per year, reaching about 108 million connections in 2017. In Dec. 2009, about half of landline connections were below 3 Mbps.

There were some notable jumps in high-speed and rural broadband deployment. There was a big jump in fiber-to-the-premises (FTTP) connections, like FiOS and Google Fiber. From 2012 to 2017, the number of FTTP connections more than doubled, to 12.6 million. Relatedly, sub-25 Mbps connections have been falling rapidly while 100 Mbps+ connections have been shooting up. In 2017, there were more connections with 100 Mbps+ (39 million) than there were connections below 25 Mbps (29 million).

In the most recent 5 years for which we have data, the number of rural subscribers (not households) with 25 Mbps increased 18 million (from 29 million to 47 million).

More Work

We only have good data for the first year of the Trump FCC, so it’s hard to evaluate but signs are promising. One of Chairman Pai’s first actions was creating an advisory committee to advise the FCC on broadband deployment (I’m a member). Anecdotally, it’s been fruitful to regularly have industry, academics, advocates, and local officials in the same room to discuss consensus policies. The FCC has acted on many of those.

The rollback of common carrier regulations for the Internet, the pro-5G deployment initiatives, and limiting unreasonable local fees for cellular equipment have all helped increase deployment and service quality.

An effective communications regulator largely stays out of the way and removes hindrances to private sector investment. But the FCC does manage some broadband subsidy programs. The Trump FCC has made some improvements to the $4.5 billion annual rural broadband programs. The 17 or so rural broadband subprograms have metastasized over the years, making for a kludgey and expensive subsidy system.

The recent RDOF reforms are a big improvement since they fund a reverse auction program to shift money away from the wasteful legacy subsidy programs. Increasingly, rural households get broadband from WISP, satellite, and rural cable companies–the RDOF reforms recognize that reality.

Hopefully one day reforms will go even further and fund broadband vouchers. It’s been longstanding FCC policy to fund rural broadband providers (typically phone companies serving rural areas) rather than subsidizing rural households. The FCC should consider a voucher model for rural broadband, $5 or $10 or $40 per household per month, depending on the geography. Essentially the FCC should do for rural households what the FCC does for low-income households–provide a monthly subsidy to make broadband costs more affordable.

Many of these good deployment trends began in the Obama years, but the Trump FCC has made it a national priority to improve broadband deployment and services. It appears to be working. With the coronavirus and a huge increase in remote work, US networks will be put to a unique test.

Impressions from the DOJ Workshop about Section 230 https://techliberation.com/2020/02/26/impressions-from-the-doj-workshop-about-section-230/ https://techliberation.com/2020/02/26/impressions-from-the-doj-workshop-about-section-230/#respond Wed, 26 Feb 2020 18:54:26 +0000 https://techliberation.com/?p=76670

Last week I attended the Section 230 cage match workshop at the DOJ. It was a packed house, likely because AG Bill Barr gave opening remarks. It was fortuitous timing for me: my article with Jennifer Huddleston, The Erosion of Publisher Liability in American Law, Section 230, and the Future of Online Curation, was published 24 hours before the workshop by the Oklahoma Law Review.

These were my impressions of the event:

I thought it was a pretty well-balanced event and surprisingly civil for such a contentious topic. There were strong Section 230 defenders and strong Section 230 critics, and several who fell in between. There were a couple cheers after a few pointed statements from panelists, but the audience didn’t seem to fall on one side or the other. I’ll add that my friend and co-blogger Neil Chilson gave an impressive presentation about how Section 230 helped make the “long tail” of beneficial Internet-based communities possible.

AG Bill Barr gave the opening remarks, which are available online. A few things jumped out. He suggested that Section 230 had its place but that Internet companies are not an infant industry anymore. In his view, the courts have expanded Section 230 beyond the drafters’ intent, and the Reno decision “unbalanced” the protections, which were intended to protect minors. The gist of his statement was that the law needs to be “recalibrated.”

Each of these points was disputed by one or more panelists, but the message to the Internet industry was clear: the USDOJ is scrutinizing industry concentration and its relationship to illegal and antisocial online content.

The workshop signals that there is now a large, bipartisan coalition that would like to see Section 230 “recalibrated.” The problem for this coalition is that they don’t agree on what types of content providers should be liable for and they are often at cross-purposes. The problematic content ranges from sex trafficking, to stalkers, to opiate trafficking, to revenge porn, to unfair political ads. For conservatives, social media companies take down too much content, intentionally helping progressives. For progressives, social media companies leave up too much content, unwittingly helping conservatives.

I’ve yet to hear a convincing way to modify Section 230 that (a) satisfies this shaky coalition, (b) would be practical to comply with, and (c) would be constitutional.

Now, Section 230 critics are right: the law blurs the line between publisher and conduit. But this is not unique to Internet companies. The fact is, courts (and federal agencies) blurred the publisher-conduit dichotomy for fifty years for mass media distributors and common carriers as technology and social norms changed. Some cases that illustrate the phenomenon:

In Auvil v. CBS 60 Minutes, a 1991 federal district court decision, some Washington apple growers sued some local CBS affiliates for airing allegedly defamatory programming. The federal district court dismissed the case on the grounds that the affiliates are conduits of CBS programming. Critically, the court recognized that the CBS affiliates “had the power to” exercise editorial control over the broadcast and “in fact occasionally [did] censor programming . . . for one reason or another.” Still, case dismissed. The principle has been cited by other courts. Publishers can be conduits.

Conduits can also be publishers. In 1989, Congress passed a law requiring phone providers to restrict “dial-a-porn” services to minors. Dial-a-porn companies sued. In Information Providers Coalition v. FCC, the 9th Circuit Court of Appeals held that regulated common carriers are “free under the Constitution to terminate service” to providers of indecent content. The Court relied on its decision a few years earlier in Carlin Communications noting that when a common carrier phone company is connecting thousands of subscribers simultaneously to the same content, the “phone company resembles less a common carrier than it does a small radio station.”

Many Section 230 reformers believe Section 230 mangled the common law and would like to see the restoration of the publisher-conduit dichotomy. As our research shows, that dichotomy had already been blurred for decades. Until advocates and lawmakers acknowledge these legal trends and plan accordingly, reformers risk throwing out the baby with the bathwater.

Relevant research:
Brent Skorup & Jennifer Huddleston, The Erosion of Publisher Liability in American Law, Section 230, and the Future of Online Curation (Oklahoma Law Review).

Brent Skorup & Joe Kane, The FCC and Quasi–Common Carriage: A Case Study of Agency Survival (Minnesota Journal of Law, Science & Technology).
