Technology Liberation Front — Keeping politicians’ hands off the Net & everything else related to technology
https://techliberation.com

Event Video: “Does the US Need a New AI Regulator?” (June 7, 2023)
https://techliberation.com/2023/06/07/event-video-does-the-us-need-a-new-ai-regulator/

Here’s the video from a June 6 event, “Does the US Need a New AI Regulator?,” co-hosted by the Center for Data Innovation and the R Street Institute. We discuss algorithmic audits, AI licensing, an “FDA for algorithms” and other possible regulatory approaches, as well as various “soft law” self-regulatory efforts and targeted agency efforts. The event was hosted by Daniel Castro and included Lee Tiedrich, Shane Tews, Ben Shneiderman and me.

Additional reading:

Studies Document Growing Cost of EU Privacy Regulations (February 9, 2023)
https://techliberation.com/2023/02/09/studies-document-growing-cost-of-eu-privacy-regulations/

[Originally published on Medium on 2/5/2022]

In an earlier essay, I explored “Why the Future of AI Will Not Be Invented in Europe” and argued that, “there is no doubt that European competitiveness is suffering today and that excessive regulation plays a fairly significant role in causing it.” This essay summarizes some of the major academic literature that leads to that conclusion.

Since the mid-1990s, the European Union has been layering on highly restrictive policies governing online data collection and use. The most significant of the E.U.’s recent mandates is the 2018 General Data Protection Regulation (GDPR). This regulation established even more stringent rules governing the protection and movement of personal data, along with limits on what organizations can do with it. Data minimization is the major priority of this system, but the regulatory scheme involves many other types of restrictions and reporting requirements. This policy framework also has ramifications for the future of next-generation technologies, especially artificial intelligence and machine learning systems, which rely on high-quality data sets to improve their efficacy.

Whether or not the E.U.’s complicated regulatory regime has actually resulted in truly meaningful privacy protections for European citizens relative to people in other countries remains open to debate. It is very difficult to measure and compare highly subjective values like privacy across countries and cultures. This makes benefit-cost analysis for privacy regulation extremely challenging — especially on the benefits side of the equation.

What is no longer up for debate, however, is the cost side of the equation and the question of what sort of consequences the GDPR has had on business formation, competition, investment, and so on. On these matters, standardized metrics exist and the economic evidence is abundantly clear: the GDPR has been a disaster for Europe.

Summary of Major Studies on Impact of EU Data Regulation

Consider the impact of E.U. data controls on business startups and market structure. The GDPR and other regulations greatly limit the flow of data to the innovative upstarts who need it most to compete, leaving most of the market to the largest companies, which can afford to comply. Benjamin Mueller of ITIF notes that just “two of the world’s 30 largest technology firms by market capitalization are from the EU,” and only “5 of the 100 most promising AI startups are based in Europe,” while private funding of AI startups in Europe for 2020 ($4 billion) was dwarfed by that in the US ($36 billion) and China ($25 billion). These issues are even more pressing as the E.U. looks to advance a new AI Act, which would layer on still more regulatory restrictions.

In concrete terms, this has meant that the E.U. came away from the digital revolution with “the complete absence of superstar companies,” argue competition policy experts Nicolas Petit and David Teece. There are no European versions of Microsoft, Google, or Apple, even though Europeans clearly demand the sort of products and services those US-based companies provide. Entrepreneurialism scholar Zoltan Acs asks: “What has been the outcome of E.U. policy in limiting entrepreneurial activity over recent decades?” His conclusion:

It is immediately clear… that the United States and China dominate the platform landscape. Based on the market value of top companies, the United States alone represents 66% of the world’s platform economy with 41 of the top 100 companies. European platform-based companies play a marginal role, with only 3% of market value.

Several recent studies have documented the costs associated with the GDPR and the E.U.’s heavy-handed approach to data flows more generally. Here is a rundown of some of the academic evidence and a summary of the major findings from these studies.

“There is a growing body of economic literature and commentary showing that the costs of implementing the GDPR benefit large online platforms, and that consent-based data collection gives a competitive advantage to firms offering a range of consumer-facing products compared to smaller market actors. This in turn increases concentration in a number of digital markets where access to data is important, by creating barriers to entry or encouraging market exit.” (p. 2–3)

“[T]his paper examines how privacy regulation shaped firm performance in a large sample of companies across 61 countries and 34 industries. Controlling for firm and country-industry-year unobserved characteristics, we compare the outcomes of firms at different levels of exposure to EU markets, before and after the enforcement of the GDPR in 2018. We find that enhanced data protection had the unintended consequence of reducing the financial performance of companies targeting European consumers. Across our full sample, firms exposed to the regulation experienced a[n] 8% decline in profits, and a 2% reduction in sales. An exception is large technology companies, which were relatively unaffected by the regulation on both performance measures. Meanwhile, we find the negative impact on profits among small technology companies to be almost double the average effect across our full sample. Following several robustness tests and placebo regressions, we conclude that the GDPR has had significant negative impacts on firm performance in general, and on small companies in particular.” (p. 1)

“We show that websites’ vendor use falls after the European Union’s General Data Protection Regulation (GDPR), but that market concentration also increases among technology vendors that provide support services to websites. We collect panel data on the web technology vendors selected by more than 27,000 top websites internationally. The week after the GDPR’s enforcement, website use of web technology vendors falls by 15% for EU residents. Websites are more likely to drop smaller vendors, which increases the relative concentration of the vendor market by 17%. Increased concentration predominantly arises among vendors that use personal data such as cookies, and from the increased relative shares of Facebook and Google-owned vendors, but not from website consent requests. Though the aggregate changes in vendor use and vendor concentration dissipate by the end of 2018, we find that the GDPR impact persists in the advertising vendor category most scrutinized by regulators. Our findings shed light on potential explanations for the sudden drop and subsequent rebound in vendor usage.” (p. 1)

“GDPR creates inherent tradeoffs between data protection and other dimensions of welfare, including competition and innovation. While some of these effects were acknowledged when constructing the legal data regime, many were disregarded. Furthermore, the magnitude and breadth of such effects may well constitute an unintended and unheeded welfare-reducing consequence. As this article shows, the GDPR limits competition and increases concentration in data and data-related markets, and potentially strengthens large data controllers. It also further reinforces the already existing barriers to data sharing in the EU, thereby potentially reducing data synergies that might result from combining different datasets controlled by separate entities.” (pp. 3–4)

“Using data on 4.1 million apps at the Google Play Store from 2016 to 2019, we document that GDPR induced the exit of about a third of available apps; and in the quarters following implementation, entry of new apps fell by half. We estimate a structural model of demand and entry in the app market. Comparing long-run equilibria with and without GDPR, we find that GDPR reduces consumer surplus and aggregate app usage by about a third. Whatever the privacy benefits of GDPR, they come at substantial costs in foregone innovation.”

“[T]his paper empirically quantifies the effects of the enforcement of the EU’s General Data Protection Regulation (GDPR) on online user behavior over time, analyzing data from 6,286 websites spanning 24 industries during the 10 months before and 18 months after the GDPR’s enforcement in 2018. A panel differences estimator, with a synthetic control group approach, isolates the short- and long-term effects of the GDPR on user behavior. The results show that, on average, the GDPR’s effects on user quantity and usage intensity are negative; e.g., the numbers of total visits to a website decrease by 4.9% and 10% due to GDPR in respectively the short- and long-term. These effects could translate into average revenue losses of $7 million for e-commerce websites and almost $2.5 million for ad-based websites 18 months after GDPR. The GDPR’s effects vary across websites, with some industries even benefiting from it; moreover, more-popular websites suffer less, suggesting that the GDPR increased market concentration.”

“This paper investigates the impact of the General Data Protection Regulation (GDPR for short) on consumers’ online browsing and search behavior using consumer panels from four countries, United Kingdom, Spain, United States, and Brazil. We find that after GDPR, a panelist exposed to GDPR submits 21.6% more search terms to access information and browses 16.3% more pages to access consumer goods and services compared to a non-exposed panelist, indicating higher friction in online search. The implications of increased friction are heterogeneous across firms: Bigger e-commerce firms see an increase in consumer traffic and more online transactions. The increase in the number of transactions at large websites is about 6 times the increase experienced by smaller firms. Overall, the post-GDPR online environment may be less competitive for online retailers and may be more difficult for EU consumers to navigate through.”

“Privacy regulations should increase trust because they provide laws that increase transparency and allow for punishment in cases in which the trustee violates trust. […] We collected survey panel data in Germany around the implementation date and ran a survey experiment with a GDPR information treatment. Our observational and experimental evidence does not support the hypothesis that the GDPR has positively affected trust. This finding and our discussion of the underlying reasons are relevant for the wider research field of trust, privacy, and big data.”

“We follow more than 110,000 websites and their third-party HTTP requests for 12 months before and 6 months after the GDPR became effective and show that websites substantially reduced their interactions with web technology providers. Importantly, this also holds for websites not legally bound by the GDPR. These changes are especially pronounced among less popular websites and regarding the collection of personal data. We document an increase in market concentration in web technology services after the introduction of the GDPR: Although all firms suffer losses, the largest vendor — Google — loses relatively less and significantly increases market share in important markets such as advertising and analytics. Our findings contribute to the discussion on how regulating privacy, artificial intelligence and other areas of data governance relate to data minimization, regulatory competition, and market structure.”

William Rinehart of the Center for Growth and Opportunity has compiled and summarized many additional studies that document the costs associated with restrictions on data, including many state privacy laws imposed in the United States.

“The Biggest Loser”: Innovation Culture Gone Wrong

Taken together, this evidence makes it clear that, “Well-meaning privacy laws can have the unintended consequence of penalizing smaller companies within technology markets.” It can also have broader geopolitical ramifications for continental competitive advantage and engagement between countries. Some have argued that the United Kingdom’s so-called “Brexit” from the EU can be viewed as not only an effort to reclaim its sovereignty but more specifically “to escape its crippling regulatory structure.” The E.U.’s approach to emerging technology regulation likely had some bearing on this. Acs argues that Britain’s move was logical, “because E.U. regulations were holding back the U.K.’s strong DPE (digital platform economy).” “If the United Kingdom was to realize its economic potential,” he says, “it had to extricate itself from the European Union,” due to the growing “dysfunctional E.U. bureaucracy.”

Can Europe turn things around? Most market watchers do not believe that the E.U. will be willing to change its regulatory course in such a way that the continent would suddenly become more open to data-driven innovation. As part of a Spring 2022 journal symposium, The International Economy asked 11 experts from Europe and the U.S. to consider where the European Union currently stood in “the global tech race.” The responses were nearly unanimous and bluntly summarized in the symposium’s title: “The Biggest Loser.” Several respondents observed how “Europe is considered to be lagging behind in the global tech race,” and “is unlikely to become a global hub of innovation.” “The future will not be invented in Europe,” another respondent concluded. Europe’s risk-averse culture and preference for meticulously detailed and highly precautionary regulatory regimes were repeatedly cited as factors.

Europe has become the biggest loser on the digital technology front not because of its people but because of its policies. Europe is home to some of the most important advanced education and engineering programs in the world, and countless brilliant minds there could be building world-leading digital technology companies to rival those of the U.S., China, and the rest of the world. But Europe’s current “innovation culture” simply will not allow it.

Innovation culture refers to “the various social and political attitudes and pronouncements towards innovation, technology, and entrepreneurial activities that, taken together, influence the innovative capacity of a culture or nation.” A positive innovation culture depends upon a dynamic, open economy that encourages new entry, entrepreneurialism, continuous investment, and the free movement of goods, ideas, and talent.

At this point, it is clear that — at least for data-driven sectors — the E.U. has created the equivalent of an anti-innovation culture, and the GDPR has clearly played a major role in that outcome. This regulatory regime has also had devastating consequences for venture capital formation and investment in Europe more generally. “Public policy and attitudes explain the relative technological decline and lack of economic dynamism,” Petit and Teece argue, and the result has been “weak venture capital markets, fragmented research capabilities, low worker mobility and frustrated entrepreneurs.”

Industrial Policy Won’t Save Europe

While the E.U. is aggressively regulating data-driven sectors, it is simultaneously trying to use industrial policy programs to advance new technological capabilities and innovations. European policymakers would obviously like to avoid a repeat of the past quarter century and the lack of digital technology competition and innovation they witnessed.

But past European industrial policy efforts on the digital technology front have largely failed, as Connor Haaland and I documented earlier. Zoltan Acs notes that, despite many state efforts to promote digital innovation across the continent in recent decades, the E.U.’s regulatory policies have produced the opposite result. “The European Union protected traditional industries and hoped that existing firms would introduce new technologies. This was a policy designed to fail,” he argues. A major recent book, Questioning the Entrepreneurial State: Status-quo, Pitfalls, and the Need for Credible Innovation Policy (Springer, 2022), offers additional evidence of the failure of European industrial policy efforts. No amount of industrial policy planning and spending can overcome a negative innovation culture that suffocates entrepreneurialism and investment right out of the gate.

These findings hold lessons for policymakers in the United States, too, especially with President Biden and even many Republicans now calling for heavy-handed, top-down regulation of digital technology companies. Basically, “President Biden Wants America to Become Europe on Tech Regulation,” as I argued in a recent R Street Institute blog post. In a letter to the Wall Street Journal, I responded to recent op-eds by President Biden and former Trump Administration Attorney General William Barr in which they both advocated regulations that would take us down the disastrous path the European Union has already charted.

“The only thing Europe exports now on the digital-technology front is regulation,” I noted in my response, and that makes it all the more mind-boggling that Biden and Barr want to go down that same path. “Overregulation by EU bureaucrats led Europe’s best entrepreneurs and investors to flee to the U.S. or elsewhere in search of the freedom to innovate.” This is the wrong innovation culture for the United States if we hope to be a leader in the Computational Revolution that is unfolding — and match expanding efforts by the Chinese to top us at it.

In closing, policymakers should never lose sight of the most fundamental lesson of innovation policy, which can be stated quite simply: You only get as much innovation as you allow to begin with. If the public policy defaults are all set to be maximally restrictive and limit entrepreneurialism and experimentation by design, then it should be no surprise when the country or continent fails to generate meaningful innovation, investment, new companies, and global competitive advantage. The European model is no model for America.

Additional reading:

Self-Inflicted Technological Suicide (January 26, 2023)
https://techliberation.com/2023/01/26/self-inflicted-technological-suicide/

The Wall Street Journal has run my response to troubling recent op-eds by President Biden (“Republicans and Democrats, Unite Against Big Tech Abuses“) and former Trump Administration Attorney General William Barr (“Congress Must Halt Big Tech’s Power Grab“) in which they both called for European-style regulation of U.S. digital technology markets.

“The only thing Europe exports now on the digital-technology front is regulation,” I noted in my response, and that makes it all the more mind-boggling that Biden and Barr want to go down that same path. My letter pointed to the results of “the EU’s big-government regulatory crusade against digital tech: Stagnant markets, limited innovation and a dearth of major players. Overregulation by EU bureaucrats led Europe’s best entrepreneurs and investors to flee to the U.S. or elsewhere in search of the freedom to innovate.”

Thus, the Biden and Barr plans for importing European-style tech mandates “would be a stake through the heart of the ‘permissionless innovation’ that made America’s info-tech economy a global powerhouse.” In a longer response to the Biden op-ed that I published on the R Street blog, I note that:

“It is remarkable to think that after years of everyone complaining about the lack of bipartisanship in Washington, we might get the one type of bipartisanship America absolutely does not need: the single most destructive technological suicide in U.S. history, with mandates being substituted for markets, and permission slips for entrepreneurial freedom.”

What makes all this even more remarkable is that these calls for hyper-regulation come at a time when China is challenging America’s dominance in technology and AI. Thus, “new mandates could compromise America’s lead,” I conclude. “Shackling our tech sectors with regulatory chains will hobble our nation’s ability to meet global competition and undermine innovation and consumer choice domestically.”

Jump over to the WSJ to read my entire response (“EU-Style Regulation Begets EU-Style Stagnation“) and to the R Street blog for my longer essay (“President Biden Wants America to Become Europe on Tech Regulation“).

Remembering the ‘Japan Inc.’ Industrial Policy Scare of the 1980s & 1990s (June 29, 2021)
https://techliberation.com/2021/06/29/remembering-the-japan-inc-industrial-policy-scare-of-the-1980s-1990s/

Discourse magazine has just published my latest essay, “‘Japan Inc.’ and Other Tales of Industrial Policy Apocalypse.” It is a short history of the hysteria surrounding the growth of Japan in the 1980s and early 1990s and its various industrial policy efforts. I begin by noting that, “American pundits and policymakers are today raising a litany of complaints about Chinese industrial policies, trade practices, industrial espionage and military expansion. Some of these concerns have merit. In each case, however, it is easy to find identical fears that were raised about Japan a generation ago.” I then walk through many of the leading books, op-eds, movies, and other artifacts of that era to show how that was the case.

“Hysteria” is not too strong a word to use in this case. Many pundits and politicians were panicking about the rise of Japan economically and, more specifically, about the way Japan’s Ministry of International Trade and Industry (MITI) was formulating industrial policy schemes for sectors in which it hoped to make advances. This resulted in veritable “MITI mania” here in America. “U.S. officials and market analysts came to view MITI with a combination of reverence and revulsion, believing that it had concocted an industrial policy cocktail that was fueling Japan’s success at the expense of American companies and interests,” I note. Countless books and essays were being published with breathless titles and predictions. I go through dozens of them in my essay. Meanwhile, the debate in policy circles and on Capitol Hill even took on an ugly racial tinge, with some lawmakers calling the Japanese “leeches” and suggesting the U.S. should have dropped more atomic bombs on Japan during World War II. At one point, in 1987, several members of Congress gathered on the lawn of the U.S. Capitol to smash Japanese electronics with sledgehammers.

All this hysteria about Japan and MITI bore little resemblance to reality. In fact, as I note in the essay, the MITI industrial planning model fell apart after it made a host of horribly bad bets and the stock market tanked in the late 1980s. Corruption also became a huge problem within many state-led efforts. A 2000 report by the Policy Research Institute within Japan’s Ministry of Finance concluded that “the Japanese model was not the source of Japanese competitiveness but the cause of our failure.” MITI was renamed the Ministry of Economy, Trade and Industry at about the same time, and its mission shifted more toward market-oriented reforms.

Industrial policy came to be viewed as a bit of a joke in America after that, but now it is back with a vengeance, thanks largely to the rise of Chinese economic power. Thus, because “we hear echoes from the Japan Inc. era debates in today’s policy discussions about China and industrial policy planning,” I end my essay with some lessons from the ‘Japan Inc.’ era for today’s industrial policy debates:

This similarity demonstrates the first lesson we can learn from the previous era: It is important to separate serious geopolitical and economic analysis from breathless fear-mongering and borderline xenophobia. The former has a serious place in policy discussions; the latter needs to be called out and shunned. After all, there are many legitimate worries about rising Chinese power, particularly when it involves Chinese Communist Party efforts to squash human rights domestically or to engage in industrial espionage, trade mercantilism and military adventurism abroad. Separating serious matters from trivial or imaginary ones is crucial, especially to help keep peace between nations. Avoiding hysteria is especially pertinent today with a wave of anti-Asian sentiment and attacks on the rise in the U.S.

A second lesson from the Japan Inc. experience relates to today’s renewed interest in industrial policy: Forecasting the future of nations and economies—and trying to plan for it—is a tricky business. A huge range of variables affects global competitiveness and technological advancement. A nonexhaustive list of some of the most important factors would include legal and political stability, physical and intellectual property rights, tax burdens, competition policy, trade and investment laws, monetary policy, research and development efforts, and even demographic factors and access to certain natural resources. Understanding how these and other factors all work together is an inexact science. When targeted industrial policy mechanisms are added to the mix, it becomes even harder to untangle which variables are making the most difference.

Both in the past and today, a less visible group of scholars has suggested that an embrace of entrepreneurialism and free trade was the fundamental factor driving Japanese economic expansion in the past and China’s amazing growth today. Openness to markets, they say, drove the enormous economic expansions—which also happened during times of much-needed catch-up modernization in both countries. But these perspectives have usually been shouted out of the room by louder voices, who either bombastically blast or praise industrial policy mechanisms as the prime mover in the economic rejuvenation of both nations.

We need to tamp down on the magical thinking that governments can easily achieve technological innovation and economic growth by simply spinning a few industrial policy gauges. A few big bets may pay off, but that doesn’t justify governments engaging in casino economics regularly. History more often shows that grandiose industrial policy schemes simply result in cost overruns, cronyism and even corruption.

I also conclude by noting that:

Perhaps the most ironic indictment of industrial policy punditry lies in the way all the earlier books and essays about Japanese planning not only failed to forecast the many flops associated with it, but also did not foresee China as a potential future economic juggernaut. Korea, Singapore and Taiwan were mentioned as potential Asian challengers, but no one gave China much consideration. What might that tell us about the ability of experts to predict the future course of countries and economies? It is a reminder of the wisdom of another great Yogi Berra quote: “It’s tough to make predictions, especially about the future.”

You can read the entire piece, as well as several others listed below, over at Discourse.


Recent writing on industrial policy:
A Return of the Trustbusters Could Harm Consumers (April 13, 2021)
https://techliberation.com/2021/04/13/a-return-of-the-trustbusters-could-harm-consumers/

Is it time for the return of the trustbusters? Some politicians seem to think so, implying that today’s tech giants have become modern-day robber barons taking advantage of the American consumer. As a result, they argue that it is time for a return of aggressive antitrust enforcement and for dramatic changes to existing antitrust interpretations to address the concerns associated with today’s big business.

This criticism is not limited to one side of the aisle, with Senators Amy Klobuchar (D-MN) and Josh Hawley (R-MO) both proposing their own dramatic overhauls of antitrust laws and the House Judiciary Committee majority issuing a report that sharply criticizes the current technology market. In both cases, these new proposals create presumptive bans on mergers for companies of a certain size and lower the burdens on the government for intervening and proving its case. I have previously analyzed the potential impact of Senator Klobuchar’s proposal, and Senator Hawley’s proposal raises many similar concerns when it comes to its merger ban and shift away from existing objective standards.

Proponents on both sides of the aisle argue changing current antitrust standards is needed to fight big business, but sadly these modern-day trustbusters may not be the heroes they see themselves as. In fact, such a shift would harm American consumers and small businesses well beyond the tech sector.

Trustbuster-Era Standards Would Fail Consumers

The original trustbusters of the late 19th and early 20th century created a system that was not always clear and could be abused by regulators subjectively determining what was and was not anti-competitive behavior. The result was that, in this earlier era, businesses and consumers could never be certain what behaviors would be considered violations.

The shift to the consumer welfare standard helped fix that problem by providing an objective framework using economic analysis to weigh the risks and benefits of behavior and judging it based on its impact on consumers rather than on specific competitors. Unfortunately, these new proposals would shift away from this objective focus and return to a presumption that big is bad. This shift would be bad news not only for big business but for smaller businesses and consumers as well. Small businesses would lose an important exit strategy option with the presumptive ban on mergers with large companies, and consumers would miss out on benefits such as price reductions, improvements, and innovations that these mergers could bring.

While much of the debate around antitrust changes focuses on large tech firms such as Google, Apple, Facebook, and Amazon, changing antitrust laws would impact far more of the economy than just tech. Both the Hawley and Klobuchar proposals would bar mergers unless there is strong evidence proving their value (a “regulatory presumption” against mergers), but this presumption would impact industries such as pharmaceuticals, finance, and agriculture that also frequently have mergers and acquisitions that benefit consumers by helping to expand the distribution of a product or improve on an existing service. In fact, companies including L’Oreal and Nike could find any mergers or acquisitions presumptively prohibited under the limits in these proposals.

Existing Standards Can Adapt to Dynamic Markets Like Tech

Existing standards are still able to address the concerns associated with dynamic and changing markets as well as more established ones. For example, the Antitrust Modernization Commission concluded, “There is no need to revise the antitrust laws to apply different rules to industries in which innovation, intellectual property, and technological change are central features.”

Regulators’ sense of a technology market can be proven wrong by the evolution of a technology or by the disruption caused by a dramatic shift in the industry. For example, antitrust debates once focused on MySpace and AOL, which have since become things of internet nostalgia. Today’s tech giants face growing challenges not only from each other in many cases, but also from newer entrants, from Clubhouse and TikTok to Zoom and Shopify. Removing the requirement to firmly establish the elements of an antitrust case would risk unnecessary intervention into the market or, more likely, could prevent actions that benefit consumers.

Some question whether this economic analysis-based standard can handle the zero-price services offered by many technology companies. While price is often the easiest focus, the standard also considers issues such as quality and innovation, making it elastic enough to address potential concerns even when the price is zero. Still, this does not mean that the definition of harm under the consumer welfare standard should be expanded to cover a litany of concerns that cannot be objectively shown to cause market harm.

Trustbusters’ Concerns with Tech Are Unlikely to Be Solved by Antitrust

Antitrust is also a poor tool to address concerns such as data privacy or content moderation, and using it to do so could allow for future abuse for other political ends. There is no guarantee that smaller companies would respond to existing market demands around issues such as content moderation any differently than the current large players. Additionally, when it comes to privacy and targeted advertising, smaller platforms would have to find new ways to gain revenue and might be forced to monetize the platform more to stay afloat without being able to rely on the revenue from a larger parent company. Finally, there is no guarantee that these smaller companies would be more innovative or dynamic, particularly as existing teams and talents are divided by breakups and walls are erected to prevent entry into certain markets.

The good news is that some policymakers have recognized these problems and argued for preserving the existing framework while addressing other concerns with appropriately targeted policies. For example, Sen. Mike Lee recently defended the consumer welfare standard and criticized the negative impact that “radically alter[ing] our antitrust regime” could have, while still questioning some recent decisions around content moderation.

Conclusion

Many have hoped for a return of bipartisan cooperation in Washington, but unfortunately bad ideas can also emerge on both sides of the aisle. Shifting away from the consumer welfare standard would ultimately harm consumers at a time when innovation and economic recovery are especially critical.

]]>
https://techliberation.com/2021/04/13/a-return-of-the-trustbusters-could-harm-consumers/feed/ 1 76868
Future Aviation, Drones, and Airspace Markets https://techliberation.com/2020/07/22/future-aviation-drones-and-airspace-markets/ https://techliberation.com/2020/07/22/future-aviation-drones-and-airspace-markets/#comments Wed, 22 Jul 2020 13:55:40 +0000 https://techliberation.com/?p=76767

My research focus lately has been studying and encouraging markets in airspace. Aviation airspace is valuable but has been assigned to date by regulatory mechanisms, custom, and rationing by industry agreement. This rationing was tolerable decades ago when airspace use was relatively light. Today, regulators need to consider markets in airspace–allowing the demarcation, purchase, and transfer of aerial corridors–in order to give later innovators airspace access, to avoid anticompetitive “route squatting,” and to serve as a revenue stream for governments, much like spectrum auctions and offshore oil leases.

Last month, the FAA came out in favor of “urban air mobility corridors”–point-to-point aerial highways that new eVTOL, helicopter, and passenger drones will use. It’s a great proposal, but the FAA’s plan for allocating and sharing those corridors is largely to let the industry negotiate it among themselves (the “Community Business Rules”):

Operations within UAM Corridors will also be supported by CBRs collaboratively developed by the stakeholder community based on industry standards or FAA guidelines and approved by the FAA.

This won’t end well, much as it didn’t end well when Congress and the Postmaster General let the nascent airlines divvy up air routes in the 1930s; we’re still living with the effects of those anticompetitive decisions. Decades later, the FAA is still refereeing industry fights over routes and airport access.

Rather, regulators should create airspace markets because otherwise, as McKinsey analysts noted last year about urban air mobility:

first movers will have an advantage by securing the most attractive sites along high-traffic routes.

Airspace today is a common-pool resource rationed via regulation and custom. But with drones, eVTOL, and urban air mobility, congestion will increase and centralized air traffic control will need to give way to a more federated and privately-managed airspace system. As happened with spectrum: a demand shock to an Ostrom-ian common pool resource should lead to enclosure and “propertization.”

Markets in airspace probably should have been created decades ago once airline routes became fixed and airports became congested. Instead, the centralized, regulatory rationing led to large economic distortions:

For example, in 1968, nearly one-third of peak-time New York City air traffic–the busiest region in the US–was general aviation (that is, small, personal) aircraft. To combat severe congestion, local authorities raised minimum landing fees by a mere $20 (in 1968 dollars) on sub-25-seat aircraft. General aviation traffic at peak times immediately fell over 30%, suggesting that a massive amount of pre-July 1968 air traffic in the region was low-value. The share of aircraft delayed by 30 or more minutes fell from 17% to about 8%.

This pricing of airspace and airport access was half-hearted and resisted by incumbents. Regulators fell back on rationing via the creation of “slots” at busy airports, which were given mostly to dominant airlines. Slots have the attributes of property–they can be defined, valued, sold, transferred, borrowed against. But the federal government refuses to call them property, partly because of the embarrassing implications. The GAO said in 2008:

[the] argument that slots are property proves too much—it suggests that the agency [FAA] has been improperly giving away potentially millions of dollars of federal property, for no compensation, since it created the slot system in 1968.

It may be too late to have airspace and route markets for traditional airlines–but it’s not too late for drones and urban air mobility. Demarcating aerial corridors should proceed quickly to bring the drone industry and services to the US. As Adam has pointed out, this is a global race of “innovation arbitrage”–drone firms will go where regulators are responsive and flexible. Federal and state aviation officials should not give away valuable drone routes, which will end up going to first-movers and the politically powerful. Airspace markets, in contrast, avoid anticompetitive lock-in effects and give drone innovators a chance to gain access to valuable routes in the future.
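The basic mechanics of such a market can be illustrated with a simple sketch. The operator names and dollar figures below are hypothetical, and a real airspace auction would be far more elaborate (package bidding, deconfliction constraints, lease terms), but a sealed-bid second-price (Vickrey) auction captures the core idea: the corridor goes to the operator who values it most, at a price set by the competition rather than by regulatory fiat or first-mover squatting.

```python
# Hypothetical sketch: a sealed-bid second-price (Vickrey) auction for a
# single aerial corridor. All names and dollar figures are invented for
# illustration; this is not a description of any actual FAA mechanism.

def vickrey_auction(bids):
    """Award the corridor to the highest bidder at the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    # The winner pays the second-highest bid (or their own bid if unopposed),
    # which makes truthful bidding each operator's dominant strategy.
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Three hypothetical eVTOL operators bid on a downtown-to-airport corridor.
bids = {"OperatorA": 1_200_000, "OperatorB": 950_000, "OperatorC": 700_000}
winner, price = vickrey_auction(bids)
print(winner, price)  # OperatorA wins and pays 950000
```

A design like this rewards the highest-value use of the corridor while generating public revenue, which is the same logic behind the spectrum auctions and offshore oil leases mentioned above.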

Research and Commentary on Airspace Markets

Law journal article. The North Carolina JOLT published my article, “Auctioning Airspace,” in October 2019. I argued for the FAA to demarcate and auction urban air mobility corridors (SSRN).

Mercatus white paper. In March 2020 Connor Haaland and I explained that federal and state transportation officials could demarcate and lease airspace to drone operators above public roads because many state laws allow local and state authorities to lease such airspace.

Law journal article. A student note in a 2020 Indiana Law Journal issue discusses airspace leasing for drone operations (pdf).

FAA report. The FAA’s Drone Advisory Committee in March 2018 took up the idea of auctioning or leasing airspace to drone operators as a way to finance the increased costs of drone regulations (pdf).

GAO report. The GAO reviewed the idea of auctioning or leasing airspace to drone operators in a December 2019 report (pdf).

Airbus UTM white paper. The Airbus UTM team reviewed the idea of auctioning or leasing airspace to UAM operators in a March 2020 report, “Fairness in Decentralized Strategic Deconfliction in UTM” (pdf).

Federalist Society video. I narrated a video for the Federalist Society in July 2020 about airspace design and drone federalism (YouTube).

Mercatus Center essay. Adam Thierer, Michael Kotrous, and Connor Haaland wrote about drone industry red tape, why the US can’t have “innovation by regulatory waiver,” and how to accelerate widespread drone services.

I’ve discussed the idea in several outlets and events, including:

Podcast Episodes about Drones and Airspace Markets

  • In a Federalist Society podcast episode, Adam Thierer and I discussed airspace markets and drone regulation with US Sen. Mike Lee. (Sen. Lee has introduced a bill to draw a line in the sky at 200 feet in order to clarify and formalize federal, state, and local powers over low-altitude airspace.)
  • Tech Policy Institute podcast episode with Sarah Oh, Eli Dourado, and Tom Lenard.
  • Macro Musings podcast episode with David Beckworth.
  • Drone Radio Show podcast episode with Randy Goers.
  • Drones in America podcast episode with Grant Guillot.
  • Uncommon Knowledge podcast episode with Juliette Sellgren.
  • Building Tomorrow podcast episode with Paul Matzko and Matthew Feeney.
  • sUAS News podcast episode and interview.
]]>
https://techliberation.com/2020/07/22/future-aviation-drones-and-airspace-markets/feed/ 1 76767
Trump’s AI Framework & the Future of Emerging Tech Governance https://techliberation.com/2020/01/08/trumps-ai-framework-the-future-of-emerging-tech-governance/ https://techliberation.com/2020/01/08/trumps-ai-framework-the-future-of-emerging-tech-governance/#respond Wed, 08 Jan 2020 20:04:57 +0000 https://techliberation.com/?p=76648

This week, the Trump Administration proposed a new policy framework for artificial intelligence (AI) technologies that attempts to balance the need for continued innovation with a set of principles to address concerns about new AI services and applications. This represents an important moment in the history of emerging technology governance as it creates a policy vision for AI that is generally consistent with earlier innovation governance frameworks established by previous administrations.

Generally speaking, the Trump governance vision for AI encourages regulatory humility and patience in the face of an uncertain technological future. However, the framework also endorses a combination of “hard” and “soft” law mechanisms to address policy concerns that have already been raised about developing or predicted AI innovations.

AI promises to revolutionize almost every sector of the economy and can potentially benefit our lives in numerous ways. But AI applications also raise a number of policy concerns, specifically regarding safety or fairness. On the safety front, for example, some are concerned about the AI systems that control drones, driverless cars, robots, and other autonomous systems. When it comes to fairness considerations, critics worry about “bias” in algorithmic systems that could deny people jobs, loans, or health care, among other things.

These concerns deserve serious consideration and some level of policy guidance or else the public may never come to trust AI systems, especially if the worst of those fears materialize as AI technologies spread. But how policy is formulated and imposed matters profoundly. A heavy-handed, top-down regulatory regime could undermine AI’s potential to improve lives and strengthen the economy. Accordingly, a flexible governance framework is needed and the administration’s new guidelines for AI regulation do a reasonably good job striking that balance.

Background

Last February, the White House issued Executive Order 13859, on “Maintaining American Leadership in Artificial Intelligence.” The Order announced the creation of the “American AI Initiative,” an effort to “focus the resources of the Federal government to develop AI.” It prioritized investments in AI-focused research and development (R&D), building a workforce ready for the AI era, international engagement on AI priorities, and the establishment of governance standards for AI systems to “help Federal regulatory agencies develop and maintain approaches for the safe and trustworthy creation and adoption of new AI technologies.”

Regarding that last objective, Order 13859 required the Office of Management and Budget (OMB) and the Office of Science and Technology Policy (OSTP) to develop a framework and set of principles for federal agencies to follow when considering the development of regulatory and non‑regulatory approaches for AI. Importantly, the Order also specified that the framework should seek to “advance American innovation” and “reduce barriers to the use of AI technologies in order to promote their innovative application while protecting civil liberties, privacy, American values, and United States economic and national security.”

That resulted in the memorandum sent to heads of federal departments and agencies this week entitled, “Guidance for Regulation of Artificial Intelligence Applications” (hereinafter AI Guidance). The draft version of the AI Guidance specifies that “federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth.” More specifically:

“Agencies must avoid a precautionary approach that holds AI systems to such an impossibly high standard that society cannot enjoy their benefits. Where AI entails risk, agencies should consider the potential benefits and costs of employing AI, when compared to the systems AI has been designed to complement or replace.”

But the AI Guidance is certainly not a call for comprehensive deregulation or the abandonment of all AI federal oversight. The memorandum’s very title reflects an understanding that existing laws and agency rules will continue to play a role in guiding the development of AI, machine-learning, and autonomous systems.

Accordingly, and consistent with past executive orders and OMB regulatory guidance documents for federal agencies, the AI Guidance establishes a set of ten principles that agencies must take into consideration when considering AI policy:

  1. Public trust in AI: Requiring that “the government’s regulatory and non-regulatory approaches to AI promote reliable, robust, and trustworthy AI applications, which will contribute to public trust in AI.”
  2. Public participation: Agencies must provide “ample opportunities for the public to provide information and participate in all stages of the rulemaking process.”
  3. Scientific integrity and information quality: Agencies should “leverage scientific and technical information and processes” to build trust and ensure data quality and transparency.
  4. Risk assessment and management: Acknowledging that “all activities involve tradeoffs,” the AI Guidance requires that “a risk-based approach should be used to determine which risks are acceptable and which risks present the possibility of unacceptable harm, or harm that has expected costs greater than expected benefits.”
  5. Benefits and costs: As part of those risk assessments, agencies must “carefully consider the full societal costs, benefits, and distributional effects before considering regulations related to the development and deployment of AI applications. Such consideration will include the potential benefits and costs of employing AI, when compared to the systems AI has been designed to complement or replace, whether implementing AI will change the type of errors created by the system, as well as comparison to the degree of risk tolerated in other existing ones.”
  6. Flexibility: OMB encourages agencies to “pursue performance-based and flexible approaches that can adapt to rapid changes and updates to AI applications.”
  7. Fairness and non-discrimination: Acknowledging that AI applications can, “in some instances, introduce real-world bias that produces discriminatory outcomes or decisions that undermine public trust and confidence in AI,” the AI Guidance requires agencies to consider “issues of fairness and non-discrimination with respect to outcomes and decisions produced by the AI application at issue.”
  8. Disclosure and transparency: Agencies are encouraged to consider how greater “transparency and disclosure can increase public trust and confidence in AI applications.”
  9. Safety and security: Agencies are required to “promote the development of AI systems that are safe, secure, and operate as intended, and encourage the consideration of safety and security issues throughout the AI design, development, deployment, and operation process.”
  10. Interagency coordination: The guidance makes it clear that a “coherent and whole-of-government approach to AI oversight requires interagency coordination.”

Soft Law Ascends

Importantly, the AI Guidance also encourages agencies to be open to “non-regulatory approaches to AI” governance and specifies three particular models:

  • Sector-specific policy guidance or frameworks: OSTP writes that “agencies should consider using any existing statutory authority to issue non-regulatory policy statements, guidance, or testing and deployment frameworks, as a means of encouraging AI innovation in that sector.” The memorandum also notes that this can include “work done in collaboration with industry, such as development of playbooks and voluntary incentive frameworks.”
  • Pilot programs and experiments: The document encourages the use of “pilot programs that provide safe harbors for specific AI applications” which “could produce useful data to inform future rulemaking and non-regulatory approaches.”
  • Voluntary consensus standards: Before regulating, the AI Guidance encourages agencies to consider how voluntary consensus standards, assessment programs, and compliance programs might be used to address policy concerns.

These represent “soft law” approaches to technological governance, and they are becoming all the rage in technology policy discussions today. Soft law mechanisms are informal, collaborative, and constantly evolving governance efforts. While not formally binding like “hard law” rules and regulations, soft law efforts nonetheless create a set of expectations about sensible development and use of technologies. Soft law can include multistakeholder initiatives, best practices and standards, agency workshops and guidance documents, educational efforts, and much more.

Soft law has become the dominant governance approach for emerging technologies because it is often better able to address the “pacing problem,” which refers to the growing gap between the rate of technological innovation and policymakers’ ability to keep up with it. As I have previously noted, the pacing problem is “becoming the great equalizer in debates over technological governance because it forces governments to rethink their approach to the regulation of many sectors and technologies.”

Not only do traditional legislative and regulatory hard law systems struggle to keep up with fast-paced technological changes, but oftentimes those older mechanisms are just too rigid and unsuited for new sectors and developments. That is definitely the case for AI, which is multi-dimensional in nature and even defies easy definition. Soft law offers a more flexible, adaptive approach to learning on the fly and cobbling together principles and policies that can address new policy concerns as they develop in specific contexts, without derailing potentially important innovations.

Building on Past Governance Frameworks

In this sense, the Trump administration’s AI Guidance borrows from past policy frameworks by marrying up a desire to promote an exciting new set of emerging technologies alongside the need for reasonable but flexible oversight and governance mechanisms. At a high level, the AI Guidance builds on many of the same principles that motivated the Clinton administration’s Framework for Global Electronic Commerce, a statement of principles and policy objectives for the then-emerging Internet. The document, which was issued in July 1997, said that “governments should encourage industry self-regulation and private sector leadership where possible” and “avoid undue restrictions on electronic commerce.”

The Framework was a clean break from the top-down regulatory paradigm that had previously governed traditional communications and media technologies. Clinton’s Framework insisted that, to the extent government intervention was needed at all, “its aim should be to support and enforce a predictable, minimalist, consistent and simple legal environment for commerce.” The use of soft law and multistakeholder models was a key component of this vision, and those more flexible governance approaches were tapped by the subsequent administrations to address emerging tech policy concerns.

For example, the Obama administration considerably expanded the use of multistakeholder mechanisms and other soft law tools in response to the need for oversight of fast-moving technologies. The Obama administration had many different policy governance efforts underway for specific AI technologies and concerns, including workshops and multistakeholder efforts focused on the safety, security, and privacy-related issues surrounding “big data” systems, online advertising, connected cars, drones, and more.

Whereas the Obama administration was deeper in the weeds of the policy issues associated with specific AI and machine-learning applications, the Trump administration has sought to both build on those focused efforts while also stepping back to consider AI governance at the 30,000-foot level. In essence, the AI Guidance combines some of the aspirational elements found in the Clinton Framework alongside the Obama administration’s more targeted approach to consider specific policy concerns across many different sectors and technologies.

Trump’s AI Guidance adds an element of formality to this process regarding how federal agencies should address AI developments and formulate potential policy responses. It does so by counseling humility and even potential forbearance until all the facts are in. “Fostering innovation and growth through forbearing from new regulations may be appropriate,” the memorandum says. “Agencies should consider new regulation only after they have reached the decision, in light of the foregoing section and other considerations, that Federal regulation is necessary.” Again, this is very much consistent with more general regulatory guidance issued by every administration since President Reagan was in office.

Flexible, Adaptive Governance is Key

The AI Guidance foreshadows the future of not only AI governance but the governance of many other emerging technologies. Hard law will continue to provide a backstop and have a role in guiding technological developments. Toward that end, efforts like the new AI Guidance are important because they represent an attempt to “regulate the regulators” by placing some ground rules on how agencies go about applying old law to new developments.

But soft law governance is where the real action is at, both for AI and almost all emerging technologies today. The Trump AI Guidance reflects the extent to which soft law has become the dominant governance paradigm for modern tech sectors. As my colleagues Jennifer Huddleston and Trace Mitchell have noted, soft law is already effectively the law of the land for driverless cars, for example. After years of congressional wrangling over a federal autonomous vehicle regulatory framework—one that has widespread bipartisan support, no less—we still do not have a law on the books. Instead, the Department of Transportation has been cobbling together “rules of the road” through informal guidance documents that have been “versioned” as if they were computer software (i.e., Version 1.0, 2.0, 3.0). Version 4.0 of the DoT guidance for automated vehicles was just released this week.

That is the same approach that the National Institute of Standards and Technology (NIST) has taken with the privacy guidelines it developed. NIST’s Privacy Framework: A Tool for Improving Privacy through Enterprise Risk Management is also versioned like software. And many other federal agencies, especially the Federal Trade Commission, have tapped a wide variety of soft law tools—such as agency workshops and workshop reports that recommended privacy best practices for various technologies. Meanwhile, the National Telecommunications and Information Administration (NTIA) has used multistakeholder processes to address privacy concerns surrounding a wide range of technologies, including drones and facial recognition. NIST, FTC, and NTIA have undertaken these informal governance efforts because, despite over a decade of debate, Congress still has not advanced comprehensive federal privacy legislation. For better or worse, soft law has filled that governance gap.

Addressing Likely Objections from Left & Right

Many people of varying ideological dispositions will object to the growing role of soft law as the primary governance tool for emerging technology policy. Some conservatives will cringe at the sound of giving regulators greater leeway to address amorphous policy concerns, fearing that it will result in unconstrained exercises of unaccountable, extra-constitutional power.

Some of those concerns are valid, but they fail to account for the fact that the prospects for the agency downsizing or deregulation they prefer are extremely limited. Practically speaking, the administrative state isn’t going anywhere. In some cases, agencies can actually do some real good by encouraging innovators to “bake in” sensible best practices to preemptively address concerns about the privacy, safety, security, and fairness of various AI systems. Better that those concerns be addressed in a more flexible, adaptive fashion than through a heavy-handed, overly rigid regulatory approach. Soft law offers that possibility, even if legitimate concerns remain about agency accountability and transparency.

Many to the left of center will be critical of this governance approach as well, but on very different grounds. As Associated Press reporter Matt O’Brien notes, “the vagueness of the principles announced by the White House is unlikely to satisfy AI watchdogs who have warned of a lack of accountability as computer systems are deployed to take on human roles in high-risk social settings, such as mortgage lending or job recruitment.”

These concerns actually are addressed in several of the OSTP’s ten principles, including those which stress the need for fairness and non-discrimination, information quality, public participation, disclosure and transparency, and safety and security. Yet many on the left will claim these principles merely pay lip service to these values and that what is really needed is a full-blown regulatory regime and some sort of corresponding new federal AI agency, which would preemptively determine which AI technologies would be allowed into the wild.

Already, an Algorithmic Accountability Act was introduced in Congress last year that would ask the FTC to take a more active role in policing “inaccurate, unfair, biased, or discriminatory decisions impacting consumers” that may have resulted from “automated decision systems.” Meanwhile, some academics have called for the creation of a Federal Robotics Commission or a National Algorithmic Technology Safety Administration to preemptively oversee new AI developments.

The problem with overly precautionary regulation of that sort is that it could unduly limit AI innovation and the many benefits it entails. There may be some AI applications that pose serious and immediate risks to humanity and require preemptive restraints on their development and use. Autonomous military and law enforcement applications are the most obvious examples. But most AI applications do not rise to that level of regulatory concern, and other governance approaches are required to balance their use and misuse. This is why a more open and flexible governance approach is needed. Moreover, the old regulatory system simply cannot keep up anymore, and it is ill-suited to address most policy concerns in a timely or efficient fashion.

Cristie Ford, an advocate of greater regulatory oversight for fintech, notes in her latest book that the problem with “old-style Welfare State regulation” is that it is “a clumsy, blunt instrument for achieving regulatory objectives” due to its reliance upon “one-size-fits-all mandates, prohibitions, and penalties.” Ford acknowledges what many other regulatory advocates are reluctant to admit: fast-paced technology sectors can no longer be governed effectively using the Analog Era’s top-down, command-and-control regulatory processes. Far too many federal agencies rely on a “build-and-freeze” model of regulation that puts rules in stone to deal with one set of issues one day, but then either fails to eliminate those rules later when they become obsolete or fails to reform them to bring them in line with new social, economic, and technical realities.

If we hope to encourage continued innovation in sectors that could produce profoundly important, life-enriching technologies, America’s regulatory approach for AI and emerging technology needs to move away from “build-and-freeze” and toward “build-and-adapt.” Regulation is still needed, but the old regulatory toolkit is badly broken. For better or worse, soft law is going to fill the resulting governance gap, regardless of objections from some on the left or the right. Pragmatic policymaking is going to carry the day for emerging technology governance.

Conclusion

The Trump Administration AI Guidance represents a continuation and extension of this trend toward more flexible, adaptive governance approaches for emerging technologies. It offers a pragmatic vision that builds on the policies and paradigms of the past, while also encouraging fresh thinking about how best to balance the need for continued innovation alongside the various concerns about disruptive technological change.

There are many challenging issues that lie ahead and the new AI Guidance cannot provide bright-line answers to all the hypothetical questions that people want answered today. No one possesses a crystal ball that will allow them to forecast the technological future. Only ongoing trial-and-error experimentation and policy improvisation will allow us to find sensible solutions. A policy approach rooted in humility, flexibility, and forbearance will help ensure that America’s regulatory policies continue to promote both innovation and the public good.

]]>
https://techliberation.com/2020/01/08/trumps-ai-framework-the-future-of-emerging-tech-governance/feed/ 0 76648
The Case for Sanctuary Cities in Many Different Contexts https://techliberation.com/2020/01/02/the-case-for-sanctuary-cities-in-many-different-contexts/ https://techliberation.com/2020/01/02/the-case-for-sanctuary-cities-in-many-different-contexts/#respond Thu, 02 Jan 2020 22:09:42 +0000 https://techliberation.com/?p=76644

[Cross-posted to Medium.]

The spread of “sanctuary cities”—local governments that resist federal laws or regulations in some fashion, and typically for strongly-held moral reasons—is one of the most interesting and controversial governance developments of recent decades. Unfortunately, the concept receives only a selective defense from people when it fits their narrow political objectives, such as sanctuary movements for immigration and gun rights.

But there is a broader case to be made for sanctuaries in many different contexts, both as a way to encourage experiments in alternative governance models and simply to let people live lives of their own choosing. The concept faces many challenges in practice, however, and I remain skeptical that sanctuary cities will ever scale up and become a widespread governance phenomenon. Federal officials simply have too much to lose, and they will likely crush any particular sanctuary movement that gains serious steam.

Sanctuary Cities as Political Civil Disobedience

First, let’s think about what local officials are really doing when they declare themselves a sanctuary. (Because they can be formed by city, county, or state governments, I will just use “sanctuaries” as a shorthand throughout this essay.)

Academics use the term “rule departure” when referencing “deliberate failures, often for conscientious reasons, to discharge the duties of one’s office.” [Joel Feinberg, “Civil Disobedience in the Modern World,” in Humanities in Society, Vol. 2, No. 1, 1979, p 37.] In this sense, sanctuary cities could be viewed as a type of collective civil disobedience by public officials because these governance arrangements are typically defended on moral grounds and represent an active form of resistance to policies imposed by higher-ups.

Rule departure and political civil disobedience can be carried out by individual government officials or entire governing bodies. Back in the 1970s, for example, some judges refused to convict Vietnam-era “draft dodgers,” even though laws made it clear that they were supposed to be punished. And, although it is rare, juries have sometimes nullified laws that they find unconscionable.

When a legislature engages in rule departure, it is often in opposition to federal policies that local officials feel are unfair or unethical. They may even declare themselves in a sort of open rebellion against a very specific directive and steadfastly refuse to acknowledge the legitimacy of the policies being imposed from above. This is how modern sanctuaries developed. In my forthcoming book, Evasive Entrepreneurs & the Future of Governance, I discuss a couple of prominent recent examples.

When state lawmakers refuse to enforce federal marijuana restrictions because officials in those states favor decriminalization, that represents rule departure between levels of government. Similarly, in May 2018, Vermont became the first state to legalize the importation of prescription drugs from Canada in an attempt to gain access to lower-priced drugs for its citizens. That policy departed from federal law, which tightly controls the importation of drugs into the US.

Rule departures by city and county governments can be even more daring and far-reaching in effect.  After the Trump Administration took office and announced more restrictive immigration policies, many mayors and local officials promptly announced that they would become sanctuary cities and not follow federal immigration reporting requirements. The number of immigration-related sanctuary cities, counties, and even entire states has grown steadily since then. [The Center for Immigration Studies keeps a running list.]

Even more controversial is the rise of the “Second Amendment sanctuary” movement that resists state or federal firearm restrictions. Virginia cities and counties have been particularly aggressive in declaring themselves gun sanctuaries, but the movement is nationwide and growing fast. Interestingly, the leaders of this movement include many local officials, including some sheriffs, who actively oppose immigration-related sanctuary cities. Conversely, most of the local officials who favor immigration sanctuaries oppose Second Amendment sanctuaries. The only thing unifying officials on either side is a commitment to engage in rule departure for moral reasons.

But here’s the question I want to explore: Why not give both these sanctuary movements (and many others) a chance, regardless of what motivates them?

A Sanctuary for Me, But Not for Thee

Of course, there are few issues that divide the Left and the Right more bitterly these days than immigration and guns, and neither side will accept the moral case for rule departure when the other side is promoting it. Stated differently, while each side will make strong moral claims in favor of rule departure for their pet issue, their defense will not extend to the underlying act of rule departure or political civil disobedience more generally.

And that’s a shame. There is a good case to be made not just for greater localized decision-making and policy experimentation, but also for letting people live lives of their own choosing under different governance arrangements.

The idea that we could ever have one single utopia has always been a silly notion for a simple reason: people are just very different. What would make more sense, the late philosopher Robert Nozick once argued, is a governance arrangement that is truly fit for a pluralistic society. In his 1974 book, Anarchy, State, and Utopia, Nozick made the case for a regime in which citizens could potentially take advantage of many different utopias to better fit their preferred governance arrangements. “Utopia is a framework for utopias, a place where people are at liberty to join together voluntarily to pursue and attempt to realize their own vision of the good life in the ideal community but where no one can impose his own utopian vision upon others,” he said.

I’ve always found this “utopia of utopias” vision enormously compelling in theory but somewhat unrealistic in practice. It is appealing precisely because it rejects any effort to define utopia in a monolithic fashion. A true utopia would reject one-size-fits-all governance schemes and instead promote a framework for optimizing an individual’s ability to choose their preferred governance arrangement (hopefully among many options). “There is no reason to think that there is one community which will serve as ideal for all people,” Nozick noted, “and much reason to think that there is not.”

Indeed, it is likely that my preferred utopia is not yours. What does my particular sanctuary look like? Adam Smith argued in 1755 that all that was needed for lifting civilization up “from the lowest barbarism” to “the highest degree of opulence” is “peace, easy taxes, and a tolerable administration of justice; all the rest being brought about by the natural course of things.” More recently, Emily Chamlee-Wright, president of the Institute for Humane Studies, elaborated on this vision when she identified the core elements of a good society as “a pluralistic and tolerant society in which intellectual and economic progress are the norm, and where individuals and communities flourish in a context of openness, peaceful and voluntary cooperation, and mutual respect.”

That pretty much sums up the utopia or sanctuary I’d like to live in. More concretely, my perfect sanctuary would combine elements of all the real-world sanctuary cities described above. It would give immigrants safe haven and allow everyone to carry firearms openly while also ignoring federal marijuana restrictions and drug importation rules! Moreover, drones would zip through the air delivering goods (regardless of what the FAA said), driverless cars would occupy the roads (regardless of what the DOT said), and citizens with serious illnesses would be more free to try alternative treatments (regardless of what the FDA said).

Of course, I also appreciate that many other people would prefer to live in sanctuaries where government plays a far more active role. Might it be possible for us all to agree to live peacefully in our separate utopias, yet also remain part of some loosely unified federation? What would help make that model work, Nozick argued, was some sort of minimal state above all the utopias that ensured peace and the free movement of people, goods, and information among them. So, you pick your utopia and I’ll pick mine, but let us agree to be free to trade with each other and to move to other utopias if we are not satisfied.

That remains a beautiful governance vision to me, and, if nothing else, I hope others would appreciate the potential benefits associated with experimentation in government administration. In his 1970 book, Exit, Voice, and Loyalty, the economist and political theorist Albert Hirschman discussed the interplay between “voice” and “exit”—for businesses, organizations, and even governments—and argued that “exit has an essential role to play in restoring quality performance of government, just as in any organization.”

Sanctuaries represent a form of localized collective voice (opposing specific policy choices made by higher-ups) combined with the implicit threat of some sort of exit. “The chances for voice to function effectively as a recuperation mechanism,” Hirschman argued, “are appreciably strengthened if voice is backed up by the threat of exit, whether it is made openly or whether the possibility of exit is merely well understood to be an element in the situation by all concerned.” I doubt any cities, counties, or states are going to try to completely exit the American republic over the issues that led them to form sanctuaries. Nonetheless, sanctuaries—and even the very threat of forming one—can still act as a sort of relief valve that allows citizens to push back against over-zealous edicts from above, while also potentially giving them the chance to “shop around” for better jurisdictional governance arrangements.

Haven’t We Already Tried This?

Practically speaking, however, a utopia of utopias must have some limits or else it breaks down under the weight of endless splintering, border disputes, and even the threat of violence. As the Wall Street Journal editorial board argued in a recent essay about sanctuary cities, an atomistic patchwork of breakaway sub-governments could lead to discord and “lawlessness.” And that was in an editorial about Second Amendment sanctuary cities, which the Journal is more ideologically predisposed to favor!

Nor is this a completely unfounded concern. Think about American history. Many people forget that America’s current Constitution is not our nation’s first. The Articles of Confederation were formulated by the 13 original colonies as they fought for their independence from Great Britain. The Articles were a dismal failure, however, and did not even last a decade. America’s Founders abandoned the Articles because the sole governing agent—Congress—lacked any real power. It could do little to sustain itself or to field an army to defend the new nation, which the Articles treated as little more than a collection of territories in “a firm league of friendship with each other.”

More importantly, because states retained all the real power under the Articles, trade skirmishes broke out among them and Congress was virtually powerless to do anything about it. The so-called “league of friendship” threatened to degenerate into endless commercial and political conflicts among loosely joined state sovereigns. The situation grew intolerable and by 1789 the Articles were discarded in favor of a new Constitution that opted for a more tightly integrated union, which would guarantee some basic rights and also help ensure that commerce and people could move freely across state borders.

The durability of this framework remains a remarkable achievement and, in some ways, could be viewed as a more workable “utopia of utopias” than what the Articles of Confederation proposed. Yet, while plenty of people still play up the benefits of devolution and local control, American federalism has been increasingly neutered over the past century. The federal government came to take on more and more authority over even the most trivial parochial matters. States and localities must now beg for freedoms from federal restrictions, but they usually cave fairly quickly and fall in line with federal demands at the mere threat of losing a few grants. Political kickbacks, it turns out, are a remarkably simple way to get subordinate bodies to comply with top-down edicts.

Does a Broader Sanctuary Movement Have Any Hope?

Which is why it is remarkable that the sanctuary city movement is still alive at all. It might be because, as George Mason University law professor Ilya Somin has suggested, many Democrats fell back in love with federalism following the election of Donald Trump. Devolution and local control suddenly sound a lot more appealing to many Dems when they offer a way to resist federal immigration restrictions and marijuana prohibition, among other issues.

It could still be the case that these sanctuary movements will be brought to heel in coming years. Current sanctuary efforts provide a good litmus test for just how much real-world policy experimentation federal officials are willing to tolerate. To the extent any particular sanctuary effort gained meaningful momentum and posed a serious challenge to federal power, I believe it would eventually be crushed. While plenty of politicians pay lip service to the idea of “reinventing government” and enhancing local decision-making, the reality is that if we ever had anything approximating actual entrepreneurial government administration in this country, the feds would likely move quickly to snuff it out.

If the Supreme Court took action to limit semi-rebellious efforts like these, it would also discourage future sanctuary city experiments. But it is more likely that, as suggested above, federal officials would just double-down on the “power of the purse” to intimidate state officials into complying—and then presumably force governors and state legislatures to do the dirty work of cracking down on cities and counties that won’t comply with federal demands. President Trump has already tapped this playbook to threaten immigration sanctuaries with Executive Order 13768 of January 25, 2017, which sought to “[e]nsure that jurisdictions that fail to comply with applicable Federal law do not receive Federal funds.” Lower courts have pushed back, however, and a bit of a stalemate has ensued.

If things got really ugly, one could imagine President Trump or a future Democratic president calling in the National Guard to deal with sanctuaries that really pushed the limits on immigration, guns, or anything else disfavored by the powers that be. God help us if we get to that point. Hopefully cooler heads will prevail.

A Dream Deferred

In the meantime, I will persist in making the case for sanctuaries and other forms of experimental government—including charter cities and special economic zones—more generally. I remain a bit of a dreamer and will continue to defend alternative governance visions based on the benefits associated with political decentralization, policy experimentation, and citizen choice. I continue to long for Nozick’s noble vision of, “a society in which utopian experimentation can be tried, different styles of life can be lived, and alternative visions of the good can be individually or jointly pursued.”

Alas, I am also a political realist and I recognize it is highly quixotic to believe that this governance framework will carry the day in the short term. Selective morality will prevail instead. That is, most people will loudly proclaim the moral imperative of sanctuaries only when it fits their ideological priors, while just as vociferously decrying creative governance alternatives that do not align with their political values. In the end, both sides will only succeed in crushing the broader dream of more decentralized communities of common interest, simply because a lot of people just cannot tolerate giving others a little zone of freedom in this world.

And so a “utopia of utopias” will likely remain a dream deferred.

]]>
https://techliberation.com/2020/01/02/the-case-for-sanctuary-cities-in-many-different-contexts/feed/ 0 76644
Is Europe Leading the US in Telecom Competition? Notes on Philippon’s “Great Reversal” https://techliberation.com/2019/12/17/is-europe-leading-the-us-in-telecom-competition-notes-on-philippons-great-reversal/ https://techliberation.com/2019/12/17/is-europe-leading-the-us-in-telecom-competition-notes-on-philippons-great-reversal/#respond Tue, 17 Dec 2019 20:37:13 +0000 https://techliberation.com/?p=76641

After coming across some reviews of Thomas Philippon’s book, The Great Reversal: How America Gave Up on Free Markets, I decided to get my hands on a copy. Most of the reviews and coverage mention the increasing monopoly power of US telecom companies and rising prices relative to European companies. In fact, Philippon tells readers in the intro of the book that the question that spurred him to write Great Reversal is “Why on earth are US cell phone plans so expensive?”

As someone who follows the US mobile market closely, I was a little disappointed that the analysis of the telecom sectors is rather slim. There’s only a handful of pages (out of 340) of Europe-US telecom comparison, featuring one story about French intervention and one chart. This isn’t a criticism of the book–Philippon doesn’t pitch it as a telecom policy book. However, the telecom section in the book isn’t the clear policy success story it’s described as.

The general narrative in the book is that US lawmakers are entranced by the laissez-faire Chicago school of antitrust and placated by dark money campaigns. The result, as Philippon puts it, is that “Creeping monopoly power has slowly but surely suffocated the [US] middle class” and today Europe has freer markets than the US. That may be, but the telecom sectors don’t provide much support for that idea.

Low Prices in European Telecom . . .

Philippon says that “The telecommunications industry provides another example of successful competition policy in Europe.”

He continues:

The case of France provides a striking example of competition. Free Mobile . . . obtained its 4G license [with regulator assistance] in 2011 and became a significant competitor for the three large incumbents. The impact was immediate. . . . In about six months after the entry of Free Mobile, the price paid by French consumers had dropped by about 40 percent. Wireless services in France had been more expensive in the US, but now they are much cheaper.

It’s true, mobile prices are generally lower in Europe. Monthly average revenue per user (ARPU) in the US, for instance, is about double the ARPU in the UK (~$42 v. ~$20 in 2016). And, as Philippon points out, cellular prices are lower in France as well.

One issue with this competition “success story”: the US also has four mobile carriers, and had four mobile carriers even prior to 2011. Since the number of competitors is the same in France and the US, competition doesn’t really explain why there’s a price difference between France and the US. (India, for instance, has fewer providers than the US and France–and much lower cellular prices, so number of competitors isn’t a great predictor of pricing.)

. . . and Low Investment

If “lower telecom prices than the US” is the standard, then yes, European competition policy has succeeded. But if consumers and regulators prioritize other things, like industry investment, network quality (fast speeds), and rural coverage, the story is much more mixed. (Bret Swanson at AEI points to other issues with Philippon’s analysis.) Philippon’s singular focus on telecom prices and number of competitors distracts from these other important competition and policy dimensions.

According to OECD data, for instance, in 2015 the US exceeded the OECD average for spending on IT and communications equipment as a percent of GDP. France might have lower cell phone bills, but US telecom companies spend nearly three times as much as French telecom companies on this measure (1.1% of GDP v. 0.4% of GDP).

Further, telecom investment per capita in the US was much higher than its European counterparts. US telecom companies spent about 55 percent more per capita than French telecoms spent ($272 v. $175), according to the same OECD reports. And France is one of the better European performers. Many European carriers spend, on a per capita basis, less than half what US carriers spend. US carriers spend 130% more than UK telecoms spend and 145% more than German telecoms.
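Since “percent more” comparisons like these are easy to garble, it is worth sketching the arithmetic. Here is a quick check in Python using the approximate OECD figures cited above (the `percent_more` helper is my own illustration, not anything from the OECD data):

```python
def percent_more(a, b):
    """How much more, in percent, a is than b (e.g., spending 2x as much = 100% more)."""
    return (a / b - 1) * 100

# ICT/communications spending as a share of GDP: US ~1.1% vs. France ~0.4%
print(round(percent_more(1.1, 0.4)))   # -> 175, i.e., nearly three times as much

# Telecom investment per capita: US ~$272 vs. France ~$175
print(round(percent_more(272, 175)))   # -> 55, i.e., about 55% more per capita
```

The key point is that 1.1% versus 0.4% of GDP works out to a 175% difference (2.75x as much), and $272 versus $175 per capita to roughly 55% more.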

This investment deficit in Europe has real-world effects on consumers. OpenSignal uses crowdsourced data and software to determine how frequently users’ phones have a 4G LTE network available (a proxy for coverage and network quality) around the world. The US ranked fourth in the world (86%) in 2017, beating out every European country save Norway. In contrast, France and Germany ranked 60th and 61st, respectively, on this network-quality measure, trailing less wealthy nations like Kazakhstan, Cambodia, and Romania.

The European telecom regulations and anti-merger policies created a fragmented market and financially strapped companies. As a result, investors are fleeing European telecom firms. According to the Financial Times and Bloomberg data, between 2012 and 2018, the value of Europe’s telecom companies fell almost 50%. The value of the US sector rose by 70% and the Asian sector rose by 13% in that time period.  

Price Wars or 5G Investment?

Philippon is right that Europe has chosen a different path than the US when it comes to telecom services. Whether they’ve chosen a pro-consumer path depends on where you sit (and live). Understandably, academics and advocates living in places like Boston, New York and DC look fondly at Berlin and Paris broadband prices. Network quality outside of the cities and suburbs rarely enters the picture in these policy discussions, and Philippon’s book is no exception. US lawmakers and telecom companies have prioritized non-price dimensions: network quality, investment in 5G, and rural coverage.

If anything, European regulators seem to be retreating somewhat from the current path of creating competitors and regulating prices. As the Financial Times wrote last year, the trend in European telecom is consolidation. The French regulator ARCEP reversed course last year, signaling a new openness to telecom consolidation.

Still, there are significant obstacles to consolidation in European markets, and it seems likely they’ll fall further behind the US and China in rural network coverage and 5G investment. European telecom companies are in a bit of panic about this, which they expressed in a letter to the European Commission this month, urging reform.

In short, European telecom competition policy is not the unqualified success depicted in Great Reversal. To his credit, Philippon in the book intro emphasizes humility about prognostications and the limits of experts’ knowledge:

I readily admit I don’t have all the answers. …I would suggest . . . that [economists’] prescriptions be taken with a (large) grain of salt. When you read an author or commentator who tells you something obvious, take your time and do the math. Almost every time, you’ll discover that it wasn’t really obvious at all. I have found that people who tell you that the answers to the big questions in economics are obvious are telling you only half of the story.

Couldn’t have put it better myself.

Credit to Connor Haaland for research assistance.

]]>
https://techliberation.com/2019/12/17/is-europe-leading-the-us-in-telecom-competition-notes-on-philippons-great-reversal/feed/ 0 76641
15 Years of the Tech Liberation Front: The Greatest Hits https://techliberation.com/2019/08/15/15-years-of-the-tech-liberation-front-the-greatest-hits/ https://techliberation.com/2019/08/15/15-years-of-the-tech-liberation-front-the-greatest-hits/#comments Thu, 15 Aug 2019 14:34:51 +0000 https://techliberation.com/?p=76579

The Technology Liberation Front just marked its 15th year in existence. That’s a long time in the blogosphere. (I’ve only been writing at TLF since 2012 so I’m still the new guy.)

Everything from Bitcoin to net neutrality to long-form pieces about technology and society was featured and debated here years before these topics hit the political mainstream.

Thank you to our contributors and our regular readers. Here are the most-read tech policy posts from TLF in the past 15 years (I’ve omitted some popular but non-tech policy posts).

No. 15: Bitcoin is going mainstream. Here is why cypherpunks shouldn’t worry. by Jerry Brito, October 2013

Today is a bit of a banner day for Bitcoin. It was five years ago today that Bitcoin was first described in a paper by Satoshi Nakamoto. And today the New York Times has finally run a profile of the cryptocurrency in its “paper of record” pages. In addition, TIME’s cover story this week is about the “deep web” and how Tor and Bitcoin facilitate it.

The fact is that Bitcoin is inching its way into the mainstream.

No. 14: Is fiber to the home (FTTH) the network of the future, or are there competing technologies? by Roslyn Layton, August 2013

There is no doubt that FTTH is a cool technology, but the love of a particular technology should not blind one to look at the economics.  After some brief background, this blog post will investigate fiber from three perspectives (1) the bandwidth requirements of web applications (2) cost of deployment and (3) substitutes and alternatives. Finally it discusses the notion of fiber as future proof.

No. 13: So You Want to Be an Internet Policy Analyst? by Adam Thierer, December 2012

Each year I am contacted by dozens of people who are looking to break into the field of information technology policy as a think tank analyst, a research fellow at an academic institution, or even as an activist. Some of the people who contact me I already know; most of them I don’t. Some are free-marketeers, but a surprising number of them are independent analysts or even activist-minded Lefties. Some of them are students; others are current professionals looking to change fields (usually because they are stuck in boring job that doesn’t let them channel their intellectual energies in a positive way). Some are lawyers; others are economists, and a growing number are computer science or engineering grads. In sum, it’s a crazy assortment of inquiries I get from people, unified only by their shared desire to move into this exciting field of public policy.

. . . Unfortunately, there’s only so much time in the day and I am sometimes not able to get back to all of them. I always feel bad about that, so, this essay is an effort to gather my thoughts and advice and put it all one place . . . .

No. 12: Violent Video Games & Youth Violence: What Does Real-World Evidence Suggest? by Adam Thierer, February 2010

So, how can we determine whether watching depictions of violence will turn us all into killing machines, rapists, robbers, or just plain ol’ desensitized thugs? Well, how about looking at the real world! Whatever lab experiments might suggest, the evidence of a link between depictions of violence in media and the real-world equivalent just does not show up in the data. The FBI produces ongoing Crime in the United States reports that document violent crimes trends. Here’s what the data tells us about overall violent crime, forcible rape, and juvenile violent crime rates over the past two decades: They have all fallen. Perhaps most impressively, the juvenile crime rate has fallen an astonishing 36% since 1995 (and the juvenile murder rate has plummeted by 62%).

No. 11: Wedding Photography and Copyright Release by Tim Lee, September 2008

I’m getting married next Spring, and I’m currently negotiating the contract with our photographer. The photography business is weird because even though customers typically pay hundreds, if not thousands, of dollars up front to have photos taken at their weddings, the copyright in the photographs is typically retained by the photographer, and customers have to go hat in hand to the photographer and pay still more money for the privilege of getting copies of their photographs.

This seems absurd to us . . . .

No. 10: Why would anyone use Bitcoin when PayPal or Visa work perfectly well? by Jerry Brito, December 2013

A common question among smart Bitcoin skeptics is, “Why would one use Bitcoin when you can use dollars or euros, which are more common and more widely accepted?” It’s a fair question, and one I’ve tried to answer by pointing out that if Bitcoin were just a currency (except new and untested), then yes, there would be little reason why one should prefer it to dollars. The fact, however, is that Bitcoin is more than money, as I recently explained in Reason. Bitcoin is better thought of as a payments system, or as a distributed ledger, that (for technical reasons) happens to use a new currency called the bitcoin as the unit of account. As Tim Lee has pointed out, Bitcoin is therefore a platform for innovation, and it is this potential that makes it so valuable.

No. 9: The Hidden Benefactor: How Advertising Informs, Educates & Benefits Consumers by Adam Thierer & Berin Szoka, February 2010

Advertising is increasingly under attack in Washington. . . . This regulatory tsunami could not come at a worse time, of course, since an attack on advertising is tantamount to an attack on media itself, and media is at a critical point of technological change. As we have pointed out repeatedly, the vast majority of media and content in this country is supported by commercial advertising in one way or another-particularly in the era of “free” content and services.

No. 8: Reverse Engineering and Innovation: Some Examples by Tim Lee, June 2006

Reverse engineering the CSS encryption scheme, by itself, isn’t an especially innovative activity. However, what I think Prof. Picker is missing is how important such reverse engineering can be as a pre-condition for subsequent innovation. To illustrate the point, I’d like to offer three examples of companies or open source projects that have forcibly opened a company’s closed architecture, and trace how these have enabled subsequent innovation . . . .

No. 7: Are You An Internet Optimist or Pessimist? The Great Debate over Technology’s Impact on Society by Adam Thierer, January 2010

The cycle goes something like this. A new technology appears. Those who fear the sweeping changes brought about by this technology see a sky that is about to fall. These “techno-pessimists” predict the death of the old order (which, ironically, is often a previous generation’s hotly-debated technology that others wanted slowed or stopped). Embracing this new technology, they fear, will result in the overthrow of traditions, beliefs, values, institutions, business models, and much else they hold sacred.

The pollyannas, by contrast, look out at the unfolding landscape and see mostly rainbows in the air. Theirs is a rose-colored world in which the technological revolution du jour is seen as improving the general lot of mankind and bringing about a better order. If something has to give, then the old ways be damned! For such “techno-optimists,” progress means some norms and institutions must adapt—perhaps even disappear—for society to continue its march forward.

No. 6: Copyright Duration and the Mickey Mouse Curve by Tom Bell, August 2009

Given the rough-and-tumble of real world lawmaking, does the rhetoric of “delicate balancing” merit any place in copyright jurisprudence? The Copyright Act does reflect compromises struck between the various parties that lobby congress and the administration for changes to federal law. A truce among special interests does not and cannot delicately balance all the interests affected by copyright law, however. Not even poetry can license the metaphor, which aggravates copyright’s public choice affliction by endowing the legislative process with more legitimacy than it deserves. To claim that copyright policy strikes a “delicate balance” commits not only legal fiction; it aids and abets a statutory tragedy.

No. 5: Cyber-Libertarianism: The Case for Real Internet Freedom by Adam Thierer & Berin Szoka, August 2009

Generally speaking, the cyber-libertarian’s motto is “Live & Let Live” and “Hands Off the Internet!” The cyber-libertarian aims to minimize the scope of state coercion in solving social and economic problems and looks instead to voluntary solutions and mutual consent-based arrangements.

Cyber-libertarians believe true “Internet freedom” is freedom from state action; not freedom for the State to reorder our affairs to supposedly make certain people or groups better off or to improve some amorphous “public interest”—an all-too-convenient facade behind which unaccountable elites can impose their will on the rest of us.

No. 4: Here’s why the Obama FCC Internet regulations don’t protect net neutrality by Brent Skorup, July 2017

It’s becoming clearer why, for six years out of eight, Obama’s appointed FCC chairmen resisted regulating the Internet with Title II of the 1934 Communications Act. Chairman Wheeler famously did not want to go that legal route. It was only after President Obama and the White House called on the FCC in late 2014 to use Title II that Chairman Wheeler relented. If anything, the hastily-drafted 2015 Open Internet rules provide a new incentive to ISPs to curate the Internet in ways they didn’t want to before.

No. 3: 10 Years Ago Today… (Thinking About Technological Progress) by Adam Thierer, February 2009

As I am getting ready to watch the Super Bowl tonight on my amazing 100-inch screen via a Sanyo high-def projector that only cost me $1,600 bucks on eBay, I started thinking back about how much things have evolved (technologically-speaking) over just the past decade. I thought to myself, what sort of technology did I have at my disposal exactly 10 years ago today, on February 1st, 1999? Here’s the miserable snapshot I came up with . . . .

No. 2: Regulatory Capture: What the Experts Have Found by Adam Thierer, December 2010

While capture theory cannot explain all regulatory policies or developments, it does provide an explanation for the actions of political actors with dismaying regularity. Because regulatory capture theory conflicts mightily with romanticized notions of “independent” regulatory agencies or “scientific” bureaucracy, it often evokes a visceral reaction and a fair bit of denialism. . . . Yet, countless studies have shown that regulatory capture has been at work in various arenas: transportation and telecommunications; energy and environmental policy; farming and financial services; and many others.

No. 1: Defining “Technology” by Adam Thierer, April 2014

I spend a lot of time reading books and essays about technology; more specifically, books and essays about technology history and criticism. Yet, I am often struck by how few of the authors of these works even bother defining what they mean by “technology.” . . . Anyway, for what it’s worth, I figured I would create this post to list some of the more interesting definitions of “technology” that I have uncovered in my own research.

How Conservatives Came to Favor the Fairness Doctrine & Net Neutrality (June 20, 2019) https://techliberation.com/2019/06/19/how-conservatives-came-to-favor-the-fairness-doctrine-net-neutrality/

I have been covering telecom and Internet policy for almost 30 years now. During much of that time – which included a nine-year stint at the Heritage Foundation — I have interacted with conservatives on various policy issues and often worked very closely with them to advance certain reforms.

If I divided my time in Tech Policy Land into two big chunks of time, I’d say the biggest tech-related policy issue for conservatives during the first 15 years I was in the business (roughly 1990 – 2005) was preventing the resurrection of the so-called Fairness Doctrine. And the biggest issue during the second 15-year period (roughly 2005 – present) was stopping the imposition of “Net neutrality” mandates on the Internet. In both cases, conservatives vociferously blasted the notion that unelected government bureaucrats should sit in judgment of what constituted “fairness” in media or “neutrality” online.

Many conservatives are suddenly changing their tune, however. President Trump and Sen. Ted Cruz, for example, have been increasingly critical of both traditional media and new tech companies in various public statements and suggested an openness to increased regulation. The President has gone after old and new media outlets alike, while Sen. Cruz (along with others like Sen. Lindsey Graham) has suggested during congressional hearings that increased oversight of social media platforms is needed, including potential antitrust action.

Meanwhile, during his short time in office, Sen. Josh Hawley (R-Mo.) has become one of the most vocal Internet critics on the Right. In a shockingly worded USA Today editorial in late May, Hawley said, “social media wastes our time and resources” and is “a field of little productive value” that has only “given us an addiction economy.” He even referred to these sites as “parasites” and blamed them for a long list of social problems, leading him to suggest that, “we’d be better off if Facebook disappeared” along with various other sites and services.

Hawley’s moral panic over social media has now bubbled over into a regulatory crusade that would unleash federal bureaucrats on the Internet in an attempt to dictate “fair” speech on the Internet. He has introduced an astonishing piece of legislation aimed at undoing the liability protections that Internet providers rely upon to provide open platforms for speech and commerce. If Hawley’s absurdly misnamed new “Ending Support for Internet Censorship Act” is implemented, it would essentially combine the core elements of the Fairness Doctrine and Net Neutrality to create a massive new regulatory regime for the Internet.

The bill would gut the immunities Internet companies enjoy under 47 USC 230 (“Section 230”) of the Communications Decency Act. Eric Goldman of the Santa Clara University School of Law has described Section 230 as the “best Internet law” and “a big part of the reason why the Internet has been such a massive success.” Indeed, as I pointed out in a Forbes column on the occasion of its 15th anniversary, Section 230 is “the foundation of our Internet freedoms” because it gives online intermediaries generous leeway to determine what content and commerce travels over their systems without the fear that they will be overwhelmed by lawsuits if other parties object to some of that content.

The Hawley bill would overturn this important legal framework for Internet freedom and instead replace it with a new “permissioned” approach. In true “Mother-May-I” style, Internet companies would need to apply for an “immunity certification” from the FTC, which would undertake investigations to determine if the petitioning platform satisfied a “requirement of politically unbiased content moderation.”

The vague language of the measure is an open invitation to massive political abuse. The entirety of the bill hinges upon the ability of Federal Trade Commission officials to define and enforce “political neutrality” online. Let’s consider what this will mean in practice.

Under the bill, the FTC must evaluate whether platforms have engaged in “politically biased moderation,” which is defined as moderation practices that are supposedly “designed to negatively affect” or “disproportionately restricts or promote access to … a political party, political candidate, or political viewpoint.” As Blake Reid of the University of Colorado Law School rightly asks, “How, exactly, is the FTC supposed to figure out what the baseline is for ‘disproportionately restricting or promoting’? How much access or availability to information about political parties, candidates, or viewpoints is enough, or not enough, or too much?”

There is no Goldilocks formula for getting things just right when it comes to content moderation. It’s a trial-and-error process that is nightmarishly difficult because of the endless eye-of-the-beholder problems associated with constructing acceptable use policies for large speech platforms. We struggled with the same issues in the broadcast and cable era, but they have been magnified a million-fold in the era of the global Internet with the endless tsunami of new content that hits our screens and devices every day. “Do we want less moderation?” asks Sec. 230 guru Jeff Kosseff. “I think we need to look at that question hard. Because we’re seeing two competing criticisms of Section 230,” he notes. “Some argue that there is too much moderation, others argue that there is not enough.”

The Hawley bill seems to imagine that a handful of FTC officials will magically be able to strike the right balance through regulatory investigations. That’s a pipe dream, of course, but let’s imagine for a moment that regulators could somehow sort through all the content on message boards, tweets, video clips, live streams, gaming sites, and whatever else, and then somehow figure out what constituted a violation of “political neutrality” in any given context. That would actually be a horrible result because let’s be perfectly clear about what that would really be: It would be a censorship board. By empowering unelected bureaucrats to make decisions about what constitutes “neutral” or “fair” speech, the Hawley measure would, as Elizabeth Nolan Brown of Reason summarizes, “put Washington in charge of Internet speech.” Or, as Sen. Ron Wyden argues more bluntly, the bill “will turn the federal government into Speech Police.” “Perhaps a more accurate title for this bill would be ‘Creating Internet Censorship Act,'” Eric Goldman is forced to conclude.

The measure is creating other strange bedfellows. You won’t see Berin Szoka of TechFreedom and Harold Feld of Public Knowledge ever agreeing on much, but they both quickly and correctly labelled Hawley’s bill a “Fairness Doctrine for the Internet.” That is quite right, and much like the old Fairness Doctrine, Hawley’s new Internet speech control regime would be open to endless political shenanigans as parties, policymakers, companies, and the various complainants line up to have their various political beefs heard and acted upon. “That’s the kind of thing Republicans said was unconstitutional (and subject to FCC agency capture and political manipulation) for decades,” says Daphne Keller of the Stanford Center for Internet & Society. Moreover, during the Net Neutrality holy wars, GOP conservatives endlessly blasted the notion that bureaucrats should be determining what constitutes “neutrality” online because it, too, would result in abuses of the regulatory process. Yet, Sen. Hawley’s bill would now mandate that exact same thing.

What is even worse is that, as law professor Josh Blackman observes, “the bill also makes it exceedingly difficult to obtain a certification” because applicants need a supermajority of 4 of the 5 FTC Commissioners. This is a public choice fiasco waiting to happen. Anyone who has studied the long, sordid history of broadcast radio and television licensing understands the danger associated with politicizing certification processes. The lawyers and lobbyists in the DC “swamp” will benefit from all the petitioning and paperwork, but it is not clear how creating a regulatory certification regime for Internet speech really benefits the general public (or even conservatives, for that matter).

Former FTC Commissioner Josh Wright identifies another obvious problem with the Hawley Bill: it “offers the choice of death by bureaucratic board or the plaintiffs’ bar.” That’s because by weakening Sec. 230’s protections, Hawley’s bill could open the floodgates to waves of frivolous legal claims in the courts if companies can’t get (or lose) certification. The irony of that result, of course, is that this bill could become a massive gift to the tort bar that Republicans love to hate!

Of course, if the law ever gets to court, it might be ruled unconstitutional. “The terms ‘politically biased’ and ‘moderation’ would have vagueness and overbreadth problems, as they can chill protected speech,” Josh Blackman argues. So it could, perhaps, be thrown out like earlier online censorship efforts. But a lot of harm could be done—both to online speech and competition—in the years leading up to a final determination about the law’s constitutionality by higher courts.

What is most outrageous about all this is that the core rationale behind Hawley’s effort—the idea that conservatives are somehow uniquely disadvantaged by large social media platforms—is utterly preposterous. In May, the Trump Administration launched a “tech bias” portal which “asked Americans to share their stories of suspected political bias.” The portal is already closed and it is unclear what, if anything, will come out of this effort. But this move and Hawley’s proposal point to the broader trend of conservatives getting more comfortable asking Big Government to redress imaginary grievances about supposed “bias” or “exclusion.”

In reality, today’s social media tools and platforms have been the greatest thing that ever happened to conservatives. Mr. Trump owes his presidency to his unparalleled ability to directly reach his audience through Twitter and other platforms. As recently as June 12, President Trump tweeted, “The Fake News has never been more dishonest than it is today. Thank goodness we can fight back on Social Media.” Well, there you have it!

Beyond the President, one need only peruse any social media site for a few minutes to find an endless stream of conservative perspectives on display. This isn’t exclusion; it’s amplification on steroids. Conservatives have more soapboxes to stand on and preach than ever before in the history of this nation.

Finally, if they were true to their philosophical priors, then conservatives also would not be insisting that they have any sort of “right” to be on any platform. These are private platforms, after all, and it is outrageous to suggest that conservatives (or any other person or group) are entitled to have a spot on any of them.

Some conservatives are fond of ridiculing liberals for being “snowflakes” when it comes to other free speech matters, such as free speech on college campuses. Many times they are right. But one has to ask who the real snowflakes are when conservative lawmakers are calling on regulatory bureaucracies to reorder speech on private platforms based on the mythical fear of not getting “fair” treatment. One also cannot help but wonder if those conservatives have thought through how this new Internet regulatory regime will play out once a more liberal administration takes back the reins of power. Conservatives will only have themselves to blame when the Speech Police come for them.


Addendum: Several folks have pointed out another irony of Hawley’s bill: it would greatly expand the powers of the administrative state, which conservatives already (correctly) feel has too much broad, unaccountable power. I should have said more on that point, but here’s a nice comment from David French of National Review, which alludes to that problem and then ties it back to my closing argument above: i.e., that this proposal will come back to haunt conservatives in the long run:

when coercion locks in — especially when that coercion is tied to constitutionally suspect broad and vague policies that delegate immense powers to the federal government — conservatives should sound the alarm. One of the best ways to evaluate the merits of legislation is to ask yourself whether the bill would still seem wise if the power you give the government were to end up in the hands of your political opponents. Is Hawley striking a blow for freedom if he ends up handing oversight of Facebook’s political content to Bernie Sanders? I think not.

Additional thoughts on the Hawley bill: Josh Wright, Daphne Keller, Blake Reid, TechFreedom, Josh Blackman, Sen. Ron Wyden, Jeff Kosseff, Eric Goldman, CCIA, NetChoice, Internet Association, David French at National Review, and John Samples.

Three Short Responses To The Pacing Problem (November 27, 2018) https://techliberation.com/2018/11/27/three-short-responses-to-the-pacing-problem/

Contemporary tech criticism displays an anti-nostalgia. Instead of being reverent for the past, anxiety about the future abounds. In these visions, the future is imagined as a strange, foreign land, beset with problems. And yet, to quote that old adage, tomorrow is the visitor that is always coming but never arrives. The future never arrives because we are assembling it today.  

The distance between the now and the future finds its hook in tech policy in the pacing problem, a term describing the mismatch between advancing technologies and society’s efforts to cope with them. Vivek Wadhwa explained that, “We haven’t come to grips with what is ethical, let alone with what the laws should be, in relation to technologies such as social media.” In The Laws of Disruption, Larry Downes explained the pacing problem like this: “technology changes exponentially, but social, economic, and legal systems change incrementally.” Or, as Adam Thierer wondered, “What happens when technological innovation outpaces the ability of laws and regulations to keep up?”

Here are three short responses.

Technological Determinism

Part of what drives the worry about a pacing problem is rooted in a belief in technological determinism. Determinism aligns human actors and technological objects in a causal relationship. Technology acts on society as an outside force. In this view of the world, technology is separate from society and thus can advance by leaps and bounds before society and regulation can catch up. In other words, technology is made an independent variable which acts upon us all.

Yet, that doesn’t describe the world in which technological objects are created and sustained. The iPhone was created by Apple following the success of the iPod in melding the hardware platform with the content of the mobile web, ultimately for the purpose of boosting sales. And people became enamored with it, lining up days before its release to grab one. Technologies aren’t alien objects. They are molded by particular interests and institutional goals, and rooted in society, especially the bourgeois virtues.

Technologies exist within human ecology, just as economic systems do. To make technology an outside force misplaces the role of human values in the creation and adoption of innovation. As separated from society, determinism allows for technology to be both mythologized and demonized. Technologies cannot outpace our ability to adapt. Rather, the speed of change, of innovation, is rate-limited by society’s ability to adapt. As Robin Hanson explained, “society’s ability to adapt is the primary constraint on how fast we adopt new technologies.”

The Technological Accident

The pacing problem also gains purchase because new technologies create the possibility for new accidents. As philosopher Paul Virilio wrote,

To invent the sailing ship or the steamer is to invent the shipwreck. To invent the train is to invent the rail accident of derailment. To invent the family automobile is to produce the pile-up on the highway.

Every newly created technology comes with the potential for problems. So the possibility set for accidents increases dramatically when a new technology comes onto the scene. But it isn’t the case that all of those risks will be manifested. Only a subset of potential problems will ever be realized. As such, it isn’t that social and regulatory systems need to have all the answers in advance. Rather, flexible systems need to be in place to deal with issues as they actually arise.

Regulation as a Real Option

Perhaps, however, we have been thinking about the pacing problem incorrectly. Maybe the pacing problem isn’t a problem as much as it is a reflection of uncertainty. Again, Vivek Wadhwa pithily explained this problem, saying, “We haven’t come to grips with what is ethical, let alone with what the laws should be, in relation to technologies such as social media.” Consider the phrase “what the laws should be.” There is little agreement as to how we should regulate social media. In other words, there is regulatory uncertainty. The concept of real options might help make sense of this.

Real options are the investment choices that a company’s management makes in order “to expand, change or curtail projects based on changing economic, technological or market conditions.” While the concept originated in strictly financial terms, economists Avinash Dixit and Robert Pindyck adapted it to understand how firms invest, or not, in the face of regulatory uncertainty. As you read this paragraph from the first chapter of their book on the subject, replace the term investment with regulation and see what you think:

Most investment decisions share three important characteristics in varying degrees. First, the investment is partially or completely irreversible. In other words, the initial cost of investment is at least partially sunk; you cannot recover it all should you change your mind. Second, there is uncertainty over the future rewards from the investment. The best you can do is to assess the probabilities of the alternative outcomes that can mean greater or smaller profit (or loss) for your venture. Third, you have some leeway about the timing of your investment. You can postpone action to get more information (but never, of course, complete certainty) about the future.

There are strong corollaries. First, most regulatory decisions are difficult to reverse. It is rare for regulations to be stricken from the books, and even if they are, the affected industries are often impacted in more subtle ways. Second, the potential benefits from a regulatory action are uncertain, as Wadhwa pointed out. And finally, government bodies do have some leeway about the timing of their regulatory action. Putting all of this together, then, regulation might be thought of as a real option.

As economists Bronwyn H. Hall and Beethika Khan explained,

The most important thing to observe about this kind of [investment] decision is that at any point in time the choice being made is not a choice between adopting and not adopting but a choice between adopting now or deferring the decision until later.

In the same way, government regulation isn’t about regulating now or not regulating at all, but about regulating now or deferring the decision until later. That sounds a lot to me like the pacing problem.  
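The option value of waiting can be made concrete with a small numeric sketch. The figures below follow a standard textbook example in the spirit of Dixit and Pindyck (a 1,600 sunk cost; a price of 200 today that rises to 300 or falls to 100 forever with equal odds; a 10 percent discount rate); the numbers are illustrative, not real-world data.

```python
# Minimal sketch of the "option value of waiting" under irreversibility
# and uncertainty. All figures are the textbook-style illustration
# described in the lead-in, not empirical data.

def pv_perpetuity(cash, r):
    """Present value of `cash` received every period starting next period."""
    return cash / r

def npv_invest_now(cost=1600.0, price=200.0, r=0.10):
    # Invest today: earn today's price now, then the expected price forever.
    # (The expected future price here equals today's price.)
    return -cost + price + pv_perpetuity(price, r)

def npv_wait(cost=1600.0, p_up=0.5, up=300.0, r=0.10):
    # Wait one period and invest only if the price rises; the bad
    # outcome is simply avoided. Value at t=1, discounted back to t=0.
    value_if_up = -cost + up + pv_perpetuity(up, r)
    return p_up * value_if_up / (1 + r)

print(round(npv_invest_now(), 2))  # 600.0
print(round(npv_wait(), 2))        # 772.73
```

Waiting beats investing immediately (about 773 versus 600) even though immediate investment has positive NPV; the gap is the value of the option to wait. The regulatory analogy: a government body deferring action until uncertainty resolves may be exercising exactly this kind of option.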

Book Review: Cathy O’Neil’s “Weapons of Math Destruction” (November 7, 2018) https://techliberation.com/2018/11/07/book-review-cathy-oneils-weapons-of-math-destruction/

To read Cathy O’Neil’s Weapons of Math Destruction (2016) is to experience another in a line of progressive pugilists of the technological age. Where Tim Wu took on the future of the Internet and Evgeny Morozov chided online slacktivism, O’Neil takes on algorithms, or what she has dubbed weapons of math destruction (WMDs).

O’Neil’s book came at just the right moment in 2016. It sounded the alarm about big data just as it was becoming a topic for public discussion. And now, two years later, her worries seem prescient. As she explains in the introduction,

Big Data has plenty of evangelists, but I’m not one of them. This book will focus sharply in the other direction, on the damage inflicted by WMDs and the injustice they perpetuate. We will explore harmful examples that affect people at critical life moments: going to college, borrowing money, getting sentenced to prison, or finding and holding a job. All of these life domains are increasingly controlled by secret models wielding arbitrary punishments.

O’Neil is explicit about laying out the blame at the feet of the WMDs, “You cannot appeal to a WMD. That’s part of their fearsome power. They do not listen.” Yet, these models aren’t deployed and adopted in a frictionless environment. Instead, they “reflect goals and ideology” as O’Neil readily admits. Where Weapons of Math Destruction falters is that it ascribes too much agency to algorithms in places, and in doing so misses the broader politics behind algorithmic decision making.

For example, O’Neil begins her book with a story about Sarah Wysocki, a teacher who got fired from the D.C. public school system because of how the teacher evaluation system ranked her abilities. O’Neil writes,

Yet at the end of the 2010-11 school year, Wysocki received a miserable score on her IMPACT evaluation. Her problem was a new scoring system known as value-added modeling, which purported to measure her effectiveness in teaching math and language skills. That score, generated by an algorithm, represented half of her overall evaluation, and it outweighed the positive reviews from school administrators and the community. This left the district with no choice but to fire her, along with 205 other teachers who had IMPACT scores below the minimal threshold.

In the ensuing pages, O’Neil describes the scoring system, how it was designed, and how it affected Wysocki. But the broader politics behind the scoring system that ousted Wysocki are just as important.

Why, for example, was the value-added score such a prominent feature in the teacher evaluation as compared to administrative and parent input? Well, research from the Bill & Melinda Gates Foundation found that a teacher’s value-added track record is among the strongest predictors of student achievement gains. So the school district restructured its evaluations to make it a central feature. As Jason Kamras, chief of human capital for D.C. schools, told the Washington Post, “We put a lot of stock in it.” But that decision wasn’t without its critics, including Washington Teachers’ Union President Nathan Saunders, who said, “You can get me to walk down the road with you to say value-added is relevant, but 50 percent is too weighted.”

Moreover, the weights changed in 2009 because the Chancellor of D.C. public schools, Michelle Rhee, had negotiated a new deal with the teachers union. In exchange for 20 percent pay raises and bonuses of $20,000 to $30,000 for effective teachers, the district was given more leeway to fire teachers for poor performance, which it did using the IMPACT system. In part, this fight was spurred on because Obama-era Education Secretary Arne Duncan was doling out $3.4 billion in Race to the Top grants that focused on teacher effectiveness measures. Moreover, Rhee was Chancellor because D.C. Mayor Adrian Fenty had pushed through legislation that bypassed the Board of Education and gave him control of the schools.

Yes, Wysocki might have been a false positive, but what about all of the poor-performing teachers that the previous system hadn’t let go? By focusing on the teachers, O’Neil steers the conversation away from what should be the central concern: did the change actually help students learn and achieve?
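For readers unfamiliar with the technique, the core idea of value-added modeling can be shown in a few lines. This is a deliberately toy version (predict each student's score from the prior year, then credit the teacher with the average residual); D.C.'s actual IMPACT model was far more elaborate, and every number below is invented for illustration.

```python
# Toy sketch of value-added modeling: a teacher's score is the mean gap
# between students' actual scores and what a prior-year prediction model
# expected. The linear predictor here is a stand-in for the real, far
# richer model; all data are hypothetical.
import statistics

def value_added(students, slope=0.8, intercept=10.0):
    """Mean of (actual - predicted) across a teacher's students,
    where predicted = intercept + slope * prior_score."""
    residuals = [actual - (intercept + slope * prior)
                 for prior, actual in students]
    return statistics.mean(residuals)

# (prior-year score, current-year score) pairs for two hypothetical teachers
teacher_a = [(60, 70), (80, 80), (70, 72)]
teacher_b = [(60, 55), (80, 70), (70, 62)]

print(round(value_added(teacher_a), 2))  # positive: students beat predictions
print(round(value_added(teacher_b), 2))  # negative: students fell short
```

Even in this toy form, the design choices O'Neil glosses over are visible: the prediction model, the weight given to the residual, and the firing threshold are all institutional decisions, not properties of the algorithm itself.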

Truth be told, my quibbles with Weapons of Math Destruction fall into two types. The first relates to questions of emphasis and scope, which become important when the reader tallies up the costs and benefits of algorithms. Perhaps it is the case that “The U.S. News college ranking has great scale, inflicts widespread damage, and generates an almost endless spiral of destructive feedback loops.” But on the other hand, lower-ranked colleges have decreased their net tuition and accepted a larger share of applicants. Yes, credit scores “open doors for some of us, while slamming them in the face of others,” but in what proportion? In Chile, for example, credit bureaus were forced to stop reporting defaults in 2012. The change was found to reduce the costs for most of the poorer defaulters, but raised the costs for non-defaulters, leading to a 3.5 percent decrease in lending and a reduction in aggregate welfare. It could be the case that “the payday loan industry operates WMDs,” but it is unclear where low-income Americans will find short-term loans if they are outlawed.

Second, Weapons of Math Destruction continuously toys with important questions regarding the moral agency of technologies but never explicitly lays them out. How much value should be ascribed to technologies? To what degree are technologies value-neutral or value-laden? All technologies, including the algorithms that O’Neil describes, are designed and implemented for certain kinds of instrumental outcomes by companies and government agencies. An institution has to take on the task of adopting an algorithm for decision-making purposes, and thus the algorithm reflects the institutional goals.

Should the algorithm be blamed, or the institutional structures that put it into place, or some combination of both? Reading with a careful eye, one will easily see that this is the fundamental question of the book, especially since O’Neil wonders whether “we’ve eliminated human bias or simply camouflaged it with technology.” But the real answer isn’t in this binary. Algorithmic problems are pluralist.

Some data on wireless networks and cancer rates (November 6, 2018) https://techliberation.com/2018/11/06/some-data-on-wireless-networks-and-cancer-rates/

By Brent Skorup and Trace Mitchell

An important benefit of 5G cellular technology is more bandwidth and more reliable wireless services. This means carriers can offer more niche services, like smart glasses for the blind and remote assistance for autonomous vehicles. A Vox article last week explored an issue familiar to technology experts: will millions of new 5G transmitters and devices increase cancer risk? It’s an important question but, in short, we’re not losing sleep over it.

5G differs from previous generations of cellular technology in that “densification” is important–putting smaller transmitters throughout neighborhoods. This densification process means that cities must regularly approve operators’ plans to upgrade infrastructure and install devices on public rights-of-way. However, some homeowners and activists are resisting 5G deployment because they fear more transmitters will lead to more radiation and cancer. (Under federal law, the FCC has safety requirements for emitters like cell towers and 5G. Therefore, state and local regulators are not allowed to make permitting decisions based on what they or their constituents believe are the effects of wireless emissions.)

We aren’t public health experts; however, we are technology researchers and decided to explore the telecom data to see if there is a relationship. If radio transmissions increase cancer, we should expect to see a correlation between the number of cellular transmitters and cancer rates. Presumably there is a cumulative effect: the more cellular radiation people are exposed to, the higher the cancer rates.

From what we can tell, there is no link between cellular systems and cancer. Despite a huge increase in the number of transmitters in the US since 2000, the nervous-system cancer rate hasn’t budged. The number of US wireless transmitters has increased massively–roughly 300%–in 15 years. (And that count is conservative: tens of millions of WiFi devices are also transmitting but are not included here.)

But the US cancer rate is the dog that didn’t bark. Over that same span of time, the types of cancer you would expect to rise if cellphones posed a cancer risk–brain and nervous system cancers–have remained flat. If anything, as the NIH has reported, these cancer rates have fallen slightly.

It’s a seeming paradox: the US added some 300,000 fairly powerful cell transmitters and hundreds of millions of lower-power devices that transmit signals through the air twenty-four hours a day, seven days a week, every day of the year, yet these transmissions have had no apparent effect on cancer rates.
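The test described above–does transmitter growth correlate with cancer incidence over time?–can be sketched in a few lines of Python. The yearly figures below are illustrative placeholders, not actual FCC or NIH series:

```python
# Illustrative sketch of the correlation test described above.
# These yearly figures are made-up placeholders, NOT real FCC/NIH data.
transmitters = [100_000, 150_000, 220_000, 300_000, 400_000]  # count per year
cancer_rate = [6.5, 6.4, 6.5, 6.3, 6.4]  # brain/CNS cases per 100,000 people

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A coefficient near zero (or negative) matches the "no link" pattern.
print(f"correlation: {pearson_r(transmitters, cancer_rate):+.2f}")
```

With real FCC transmitter counts and NIH incidence data substituted in, a near-zero or negative coefficient would match the flat-cancer-rate pattern the post describes.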

The fear of 4G and 5G transmitters stems from a common misunderstanding about radiation. Significant exposure to ionizing radiation, the kind emitted by X-rays and ultraviolet light, does have the potential to cause cancer. However, as the Vox article and other experts point out, cellular systems and devices don’t emit ionizing radiation. Tech devices emit a form of non-ionizing radiation, the type of radiation you receive from the visible light that bounces off, say, a book you hold in your hand. Unlike ionizing radiation, non-ionizing radiation is too weak to alter DNA.
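The ionizing versus non-ionizing distinction comes down to photon energy, E = h·f. A rough back-of-the-envelope check in Python (the frequencies below are typical illustrative values, not exact band allocations, and the ~10 eV ionization threshold is an order-of-magnitude figure):

```python
# Photon energy E = h * f, in electron-volts. Frequencies are typical,
# illustrative values, not exact band allocations for any carrier.
PLANCK_H_EV = 4.1357e-15  # Planck constant in eV·s

def photon_energy_ev(freq_hz):
    return PLANCK_H_EV * freq_hz

bands = {
    "4G (~2 GHz)": 2e9,
    "WiFi (~5 GHz)": 5e9,
    "5G mmWave (~39 GHz)": 39e9,
    "Ultraviolet (~1.5e15 Hz)": 1.5e15,
}

# Ionizing molecules takes on the order of 10 eV. Every cellular band
# falls short by roughly a factor of a million or more, which is why
# these transmissions cannot break chemical bonds in DNA.
for name, f in bands.items():
    print(f"{name:28s} {photon_energy_ev(f):.2e} eV")
```

The gap is not marginal: cellular photons carry microelectronvolts of energy, while ionization requires electronvolts, which is the physical basis for the distinction the article draws.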

More research would be welcome. The Vox article notes that much of the wireless-system cancer research is low-quality. Further, while wireless systems don’t seem to cause DNA damage, there may be other effects on cells. A very focused wireless transmission from inches away can excite molecules and raise their temperature–this is how a microwave oven works–so it might be a good idea to keep your cellphone on your desk, not in your pocket, when possible. In the end, however, resist the technopanic: we don’t see much to be concerned about.

]]>
https://techliberation.com/2018/11/06/some-data-on-wireless-networks-and-cancer-rates/feed/ 2 76401
The Many Forms of Entrepreneurialism https://techliberation.com/2018/08/31/the-many-forms-of-entrepreneurialism/ https://techliberation.com/2018/08/31/the-many-forms-of-entrepreneurialism/#respond Fri, 31 Aug 2018 14:16:14 +0000 https://techliberation.com/?p=76367

by Adam Thierer & Trace Mitchell

[originally published on The Bridge on August 30, 2018.]


What is an entrepreneur?

While it may seem straightforward, this question is deceptively complex. The term can be used in many different ways to describe a variety of individuals who engage in economic, political, or even social activities. Entrepreneurs affect almost every aspect of modern society. While most people probably have a general sense of what is meant when they hear the term entrepreneur, it can be difficult to provide a precise definition. This is due in no small part to the fact that some of the primary thinkers who have given substance to the term have placed their focus on different aspects of entrepreneurialism.

How Economists Talk About Entrepreneurs

Austrian economist Joseph Schumpeter thought that the purpose of an entrepreneur was “to reform or revolutionize the pattern of production by exploiting an invention.”  Schumpeterian entrepreneurs are highly creative, disruptive innovators who challenge the status quo in order to bring about new economic opportunities. American economist Israel Kirzner viewed the defining characteristic of entrepreneurs as “alertness.” Kirznerian entrepreneurs are individuals who are able to identify the ways in which a market could be moved closer to its equilibrium, such as recognizing a gap in knowledge between different economic actors.

In the time since Schumpeter and Kirzner helped lay the groundwork, a number of George Mason University-affiliated scholars have made major contributions to our understanding of entrepreneurialism. Don Boudreaux; Jerry Ellig and Daniel Lin; and Virgil Storr, Stefanie Haeffele, and Laura Grube have offered a merged view of Schumpeterian and Kirznerian entrepreneurialism, showing the significant overlap between the two approaches.

In this new way of looking at the issue, entrepreneurs are crucial to innovation, economic growth, and societal change. They are dynamic actors who respond to incentives and market signals. “Greater discovery and innovation are the benchmarks of dynamic competition,” note Ellig and Lin, “not the driving down of price to marginal cost.”

Productive and Unproductive Entrepreneurs

But are all of these dynamic entrepreneurs good for society? Among modern economists and political scientists, there is a general consensus that Schumpeterian-Kirznerian entrepreneurs are individuals who either find or create value within society. In recent decades, therefore, scholars have focused on applying those insights more broadly and developing a more robust way to categorize different types of entrepreneurial activity.

Another American economist, William Baumol, drew an important distinction between productive and unproductive entrepreneurs. He described productive entrepreneurs as people engaged in enterprising activity that generates value within society, such as the creation of new and innovative technologies. However, he also found that entrepreneurs could be unproductive if they did not create value, or actively harmful if they destroyed it. As he put it, “Indeed, at times the entrepreneur may even lead a parasitical existence that is actually damaging to the economy.” For Baumol, entrepreneurs are defined not as individuals who develop new methods of creating value but rather as “persons who are ingenious and creative in finding ways that add to their own wealth, power, and prestige.”

Entrepreneurs in the Political Arena

An individual who is highly skilled at lobbying a particular governmental agency might be considered an entrepreneur, but that does not mean they are necessarily contributing value to society overall. Some scholars refer to this as political entrepreneurialism. Economists Peter Boettke and Christopher Coyne define political entrepreneurs as “individuals who operate in political institutions and who are alert to profit opportunities created by those institutions.” Utah State University professors Randy Simmons, Ryan Yonk and Diana Thomas observe how such entrepreneurs seek specific rewards or privileges from political institutions and interactions through “alertness to previously unnoticed rent-seeking opportunities.” Rent-seeking is an economic concept describing how a person or group derives benefits from a particular institutional arrangement without actually creating value for others.

Our Mercatus Center colleague Matthew Mitchell has documented the “long list of privileges that governments occasionally bestow upon particular firms or particular industries.” Mitchell offers a taxonomy of the sort of privileges that political entrepreneurs seek. They include: “monopoly status, favorable regulations, subsidies, bailouts, loan guarantees, targeted tax breaks, protection from foreign competition, and noncompetitive contracts.”

All of these privileges could qualify as a form of Baumol’s “unproductive entrepreneurship” or, in the extreme, what he called destructive entrepreneurialism. Professors Sameeksha Desai, Zoltan Acs and Utz Weitzel define destructive entrepreneurship as “wealth-destroying (such as the destruction of inputs for production activities).” Whereas unproductive entrepreneurship “seeks to redistribute from one individual to another individual,” Boettke and Coyne note, “destructive entrepreneurship reduces the total surplus in an attempt by the entrepreneur to increase his own wealth.” Outright theft and violent conflict over resources are examples of destructive entrepreneurship.

When policymakers reward unproductive or destructive political entrepreneurs, it has profound effects on the well-being of ordinary people and entire nations.

Evasive and Regulatory Entrepreneurs

Not all political entrepreneurs are necessarily out to gain privileges from government at the expense of others, however. Some entrepreneurs are more interested in simply gaining greater freedom to innovate. Scholars have used the terms evasive entrepreneurs or regulatory entrepreneurs to describe such actors. Researchers Niklas Elert and Magnus Henrekson define evasive entrepreneurialism as “profit-driven business activity in the market aimed at circumventing the existing institutional framework by using innovations to exploit contradictions in that framework.” GMU economists Christopher Coyne and Peter Leeson argue that “[e]vasive activities include the expenditure of resources and efforts in evading the legal system or in avoiding the unproductive activities of other agents.” Regulatory entrepreneurs, according to legal scholars Elizabeth Pollman and Jordan Barry, are innovators who “are in the business of trying to change or shape the law” and are “strategically operating in a zone of questionable legality or breaking the law until they can (hopefully) change it.” Evasive or regulatory entrepreneurs generally adopt a “permissionless innovation” approach to both business and political activities.

Generally speaking, the two terms are near-synonyms, although regulatory entrepreneurialism implies a more active intent to change policy through entrepreneurial acts. Evasive entrepreneurs might even be ignorant of what the law says, whereas regulatory entrepreneurs, by definition, understand how the law negatively affects their efforts and seek to change policy through their actions.

However, both evasive and regulatory entrepreneurs are distinct from what economists Alexandre Padilla and Nicolas Cachanosky call indirectly productive entrepreneurs. They argue that regulation often creates unintended consequences which lead to new entrepreneurial opportunities. Indirectly productive entrepreneurs seize upon these opportunities by finding ways to mitigate the costs associated with specific regulations. Unlike regulatory entrepreneurs, who desire to change policy, or evasive entrepreneurs, who seek to avoid it, indirectly productive entrepreneurs create value by reducing the harm caused by policies. For example, the Transportation Security Administration (TSA) prohibits passengers from bringing liquids onto an airplane unless they are kept in containers of 3.4 ounces or less. In response, several indirectly productive entrepreneurs have created “TSA Approved” containers for shampoo, mouthwash, and other toiletries that make it easier for passengers to comply with the regulation.

Social Entrepreneurs

There is also a growing acknowledgment that entrepreneurial behavior can transcend economic or political activities. Mercatus scholars have defined social entrepreneurs as individuals who engage in “innovative, social value-creating activity that can occur within or across the nonprofit, business, or government sectors.”  Social entrepreneurial activities are not typically in pursuit of compensation or profit, but that need not always be the case and “the distinction between social and commercial entrepreneurship is not dichotomous, but… a continuum ranging from purely social to purely economic,” they note.

Some sort of social mission drives this type of entrepreneurship, and social entrepreneurialism will often incorporate what MIT economist Eric von Hippel refers to as “free innovation.” He defines a free innovation as “a functionally novel product, service, or process that (1) was developed by consumers at private cost during their unpaid discretionary time (that is, no one paid them to do it) and (2) is not protected by its developers, and so is potentially acquirable by anyone without payment—for free.”  A good example of free innovation would be social entrepreneurs using 3D printers and open source designs to voluntarily create prosthetics for children with limb deficiencies.

Conclusion

As this brief survey reveals, there are many different forms of entrepreneurialism. Individuals can act in an entrepreneurial fashion in pursuit of many different objectives: profits, fame, social or legal change, or even personal or organizational privileges that come at the expense of others. Clearly, not all forms of entrepreneurialism produce socially beneficial outcomes. Policymakers should seek to foster and reward Schumpeterian-Kirznerian entrepreneurs given the positive implications for innovation and economic growth and avoid falling into the trap of rewarding political entrepreneurs, who instead seek to game laws and regulations to their own advantage.

Given the extensive research and academic literature inherent to this subject, we’ve curated a list of selected readings below.

 


Further Reading

Austin, J., Stevenson, H., & Wei-Skillern, J. (2006). Social and Commercial Entrepreneurship: Same, Different, or Both?  Entrepreneurship Theory and Practice, 30(1), 370-384. Retrieved from https://onlinelibrary.wiley.com/doi/full/10.1111/j.1540-6520.2006.00107.x

Baumol, W. (1968). Entrepreneurship in Economic Theory. The American Economic Review, 58(2), 64-71. Retrieved from https://www.jstor.org/stable/1831798?seq=1#page_scan_tab_contents

Baumol, W. (1990). Entrepreneurship: Productive, Unproductive and Destructive.  Journal of Political Economy, 98(5), 893-921. Retrieved from https://www.jstor.org/stable/2937617?seq=1#page_scan_tab_contents.

Boettke, P. J., & Coyne, C. J. (2009). Context Matters: Institutions and Entrepreneurship.  Foundations and Trends in Entrepreneurship, 5(3), 135-209. Retrieved from https://www.nowpublishers.com/article/Details/ENT-018.

Boudreaux, D. (1994). Schumpeter and Kirzner on Competition and Equilibrium. In P. Boettke & D. Prychitko (Eds.), The Market Process: Essays in Contemporary Austrian Economics (pp. 52-61). Cheltenham, UK: Edward Elgar. Retrieved from http://cafehayek.com/wp-content/uploads/2011/02/Heres-a-paper-that-I-wrote-back-in-1986-or-1987.-In-it-I-attempt-to-explain-how-non-price-competition-can-be-equilibrating..pdf

Coyne, C. J., & Leeson, P. T. (2004). The Plight of Underdeveloped Countries.  Cato Journal, 24(3), 235-249. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=869123

Desai, S., & Acs, Z. J. (2007). A theory of destructive entrepreneurship. Jena Economic Research Papers no. 85, Friedrich-Schiller University and Max Planck Institute of Economics, Jena, Germany, October, Retrieved from https://www.econstor.eu/bitstream/10419/25657/1/553834517.PDF

Dees, J. G. (2001), ‘The meaning of Social Entrepreneurship’. The Fuqua School of Business, Center for the Advancement of Social Entrepreneurship.

Desai, S., Acs, Z.J., and Weitzel, U. (2013), “A model of destructive entrepreneurship: insight for conflict and post-conflict recovery,” Journal of Conflict Resolution, Vol. 57, No. 1, pp. 20–40, Retrieved from https://repository.ubn.ru.nl/bitstream/handle/2066/170796/170796.pdf

Elert, N. & Henrekson, M. (2016). Evasive Entrepreneurialism.  Small Business Economics, 47(1), 95-113. Retrieved from http://www.ifn.se/wfiles/wp/wp1044.pdf.

Ellig, J. & Lin, D. (2001). A Taxonomy of Dynamic Competition Theories. In J. Ellig (Ed.), Dynamic Competition and Public Policy: Technology, Innovation, and Antitrust Issues (pp. 16-44). Cambridge: Cambridge University Press. Retrieved from https://www.cambridge.org/core/books/dynamic-competition-and-public-policy/taxonomy-of-dynamic-competition-theories/C536918DD453ADB34A47F48EDA6D21B7.

Hippel, E. V. (2017).  Free Innovation. Cambridge, MA: The MIT Press. Retrieved from https://mitpress.mit.edu/books/free-innovation.

Kirzner, I. M. (2009). The Alert and Creative Entrepreneur: A Clarification.  Small Business Economics, 32(2), 145-152. Retrieved from https://link.springer.com/article/10.1007/s11187-008-9153-7

Lucas, D. S. & Fuller, C. S. (2015). Entrepreneurship: Productive, Unproductive, and Destructive—Relative to What?  Journal of Business Venturing Insights, 7, 45-49. Retrieved from https://www.sciencedirect.com/science/article/pii/S2352673417300033.

Mitchell, M. D. (2012). The Pathology of Privilege: The Economic Consequences of Government Favoritism.  Mercatus Center. Retrieved from https://www.mercatus.org/publication/pathology-privilege-economic-consequences-government-favoritism.

Murphy, K.M., Shleifer, A. and Vishny, R.W. (1991) “The Allocation of Talent: Implications for Growth,” The Quarterly Journal of Economics, 106(2): 503-530. Retrieved from http://www.nber.org/papers/w3530

Murphy, K.M., Shleifer, A. and Vishny, R.W. (1993) “Why is rent-seeking so costly to growth?” American Economic Review Papers and Proceedings, 83 (2): 409-414. Retrieved from https://scholar.harvard.edu/shleifer/publications/why-rent-seeking-so-costly-growth

Padilla, A. & Cachanosky, N. (2016). Indirectly Productive Entrepreneurship.  Journal of Enterprise and Public Policy, 5(2), 161–175. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2584741.

Pollman, E. & Barry, J. M. (2017). Regulatory Entrepreneurship.  Southern California Law Review, 90, 383-448. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2741987.

Schumpeter, J. (1942, 2008).  Capitalism, Socialism and Democracy (3rd ed.). New York, NY: HarperCollins Publishers. Retrieved from https://www.amazon.com/Capitalism-Socialism-Democracy-Joseph-Schumpeter/dp/0061561614.

Simmons, R. T., Yonk, R. M., & Thomas, D. W. (2011). Bootleggers, Baptists, and Political Entrepreneurs: Key Players in the Rational Game and Morality Play of Regulatory Politics.  The Independent Review, 15(3), 367-381. Retrieved from http://www.independent.org/pdf/tir/tir_15_03_3_simmons.pdf.

Storr, V., Haeffele, S., & Grube, L. (2015). The Entrepreneur as a Driver of Social Change. In Community Revival in the Wake of Disaster (pp. 11-31). New York, NY: Palgrave Macmillan. Retrieved from https://www.palgrave.com/us/book/9781137286086

Thierer, A. (2018). Evasive Entrepreneurialism and Technological Civil Disobedience: Basic Definitions,  The Bridge. Retrieved from https://www.mercatus.org/bridge/commentary/evasive-entrepreneurialism-and-technological-civil-disobedience-basic-definitions

Thierer, A. (2016).  Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. Retrieved from https://www.mercatus.org/publication/permissionless-innovation-continuing-case-comprehensive-technological-freedom

Thierer, A. (2017). You’re in Joseph Schumpeter’s economy now.  Learn Liberty, Retrieved from http://www.learnliberty.org/blog/youre-in-joseph-schumpeters-economy-now

]]>
https://techliberation.com/2018/08/31/the-many-forms-of-entrepreneurialism/feed/ 0 76367
Infrastructure Control as Innovation Regulation https://techliberation.com/2018/08/10/infrastructure-control-as-innovation-regulation/ https://techliberation.com/2018/08/10/infrastructure-control-as-innovation-regulation/#comments Fri, 10 Aug 2018 20:28:51 +0000 https://techliberation.com/?p=76343

The ongoing ride-sharing wars in New York City are interesting to watch because they signal the potential move by state and local officials to use infrastructure management as an indirect form of innovation control or competition suppression. It is getting harder for state and local officials to defend barriers to entry and innovation using traditional regulatory rationales and methods, which are usually little more than a front for cronyist protectionism schemes. Now that the public has increasingly enjoyed new choices and better services in this and other fields thanks to technological innovation, it is very hard to convince citizens they would be better off without more of the same.

If, however, policymakers claim that they are limiting entry or innovation based on concerns about how disruptive actors supposedly negatively affect local infrastructure (in the form of traffic or sidewalk congestion, aesthetic nuisance, deteriorating infrastructure, etc.), that narrative can perhaps make it easier to sell the resulting regulations to the public or, more importantly, the courts. Going forward, I suspect that this will become a commonly-used playbook for many state and local officials looking to limit the reach of new technologies, including ride-sharing companies, electric scooters, driverless cars, drones, and many others.

To be clear, infrastructure control is both (a) a legitimate state and local prerogative; and (b) something that has been used in the past to control innovation and entry in other sectors. But I suspect that this approach is about to become far more prevalent because a full-frontal defense of barriers to innovation is far more likely to face serious public and legal challenges. For example, limiting ride-sharing competition in NYC on the grounds that it hurts local taxi cartels is unappealing to citizens and the courts alike. So, NYC is now making it all about traffic congestion. Even if that regulatory rationale is bunk, it is a much harder narrative to counter in the court of public opinion or the courts of law. For that reason, we can expect more and more state and local governments to just flip the narrative about innovation regulation going forward in this fashion.

How should defenders of innovation and competition respond to state and local efforts to use infrastructure control as an indirect form of innovation regulation? First, call them out on it if it really is just naked protectionism by another name. Second, to the extent there may be something to their asserted concerns about infrastructure problems, propose alternative solutions that do not freeze innovation and new entry outright. The best approach is to borrow a page from Coase’s playbook and use smarter pricing and property-rights solutions. Or perhaps use unique funding mechanisms for new and better infrastructure that could accommodate ongoing entry and innovation.

For example, my Mercatus colleague Salim Furth recently penned a column (“Let Private Companies Pay for More Bike Lanes”) in which he noted how the electric scooter company Bird has offered cities a dollar a day per scooter to help build protected bike lanes. In doing so, Furth notes, Bird is:

offering to enter the long tradition of private provision of public goods. The original subway lines were private. Private institutions have frequently built or maintained public parks. Radio broadcasts, a textbook example of a public good, are largely private in the US. Companies often provide public entertainment because they benefit from the attraction.

In a similar way, Uber has already supported usage-based road pricing to alleviate congestion.  We could imagine still other examples like this for emerging technology companies. Drone manufacturers could help create or pay for “aerial sidewalks” or easements so they can deliver goods more efficiently. Scooter and dockless bike companies could help pay for bike and scooter paths either directly or through promotional efforts. Driverless car fleet providers could help build or cover the cost of new parking garages or for road improvements that would help make autonomous systems work better in local communities.

That is the pro-consumer, pro-innovation path forward. Hopefully, state and local officials will embrace such forward-looking reform ideas instead of seeking to indirectly control new entry and competition under the guise of infrastructure management.

]]>
https://techliberation.com/2018/08/10/infrastructure-control-as-innovation-regulation/feed/ 1 76343
The Definition of Technology Matters For Tech Policy And Growth https://techliberation.com/2018/08/02/the-definition-of-technology-matters-for-tech-policy-and-growth/ https://techliberation.com/2018/08/02/the-definition-of-technology-matters-for-tech-policy-and-growth/#comments Thu, 02 Aug 2018 19:22:52 +0000 https://techliberation.com/?p=76331

Dan Wang has a new post titled “How Technology Grows (a restatement of definite optimism)” and it is characteristically good. For tech policy wonks and policymakers, put it in your queue. The essay clocks in at 7500 words, but there’s a lot to glean from the piece. Indeed, he puts into words a number of ideas I’ve been wanting to write about. To set the stage, he begins first by defining what we mean by technology:

Technology should be understood in three distinct forms: as processes embedded into tools (like pots, pans, and stoves); explicit instructions (like recipes); and as process knowledge, or what we can also refer to as tacit knowledge, know-how, and technical experience. Process knowledge is the kind of knowledge that’s hard to write down as an instruction. You can give someone a well-equipped kitchen and an extraordinarily detailed recipe, but unless he already has some cooking experience, we shouldn’t expect him to prepare a great dish.

As he rightly points out, the United States has, for various reasons, set aside the focus on process knowledge. Where this is especially evident comes in our manufacturing base:

When firms and factories go away, the accumulated process knowledge disappears as well. Industrial experience, scaling expertise, and all the things that come with learning-by-doing will decay. I visited Germany earlier this year to talk to people in industry. One point Germans kept bringing up was that the US has de-industrialized itself and scattered its production networks. While Germany responded to globalization by moving up the value chain, the US manufacturing base mostly responded by abandoning production.

The US is an outlier among rich countries when it comes to manufacturing exports. It needs improvement.

Two comments on this.

First off, I couldn’t agree more with Dan’s emphasis on the localization of knowledge. Local knowledge networks made Silicon Valley what it is. By far the best dive into this topic is still Annalee Saxenian’s “Regional Advantage,” which charts the computer industry’s genesis in both Silicon Valley and along Boston’s Route 128. As she details throughout the book, the culture of work and the resulting firm structures in Silicon Valley differed significantly from those in Boston, giving it critical advantages to become the preeminent region of technology development.

When I read it a couple of years back, I highlighted the importance of regional knowledge hubs:

As a side comment, Saxenian mentions that many Silicon Valley workers were far more rooted in the region than others. While the company man of the 1950s might move among the various arms of the firm to gain experience, which could be in different states, in the Valley you would just move down the street. To me, that speaks volumes about the importance of regional knowledge hubs.

Without them, an industry can lose dominance.

Green tech is the best and most recent example. Some have lamented that the US isn’t in the lead in producing photovoltaic tech, and that we import too much of it from China. Yet China doesn’t have a labor or productivity advantage here. It comes down to scale and supply-chain management, according to research:

We find that the historical price advantage of a China-based factory relative to a U.S.-based factory is not driven by country-specific advantages, but instead by scale and supply-chain development. Looking forward, we calculate that technology innovations may result in effectively equivalent minimum sustainable manufacturing prices for the two locations. In this long-run scenario, the relative share of module shipping costs, as well as other factors, may promote regionalization of module-manufacturing operations to cost-effectively address local market demand. Our findings highlight the role of innovation, importance of manufacturing scale, and opportunity for global collaboration to increase the installed capacity of PV worldwide.
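The scale advantage the researchers point to is often modeled with Wright’s law: each doubling of cumulative production cuts unit cost by a roughly constant fraction. A sketch of that relationship (the 20 percent learning rate and the cost and volume figures below are illustrative assumptions, not numbers from the cited study):

```python
import math

def wrights_law_cost(initial_cost, initial_volume, volume, learning_rate=0.20):
    """Unit cost after scaling, assuming each doubling of cumulative
    volume cuts cost by `learning_rate` (an illustrative assumption)."""
    doublings = math.log2(volume / initial_volume)
    return initial_cost * (1 - learning_rate) ** doublings

# A factory scaling cumulative output 16x (four doublings) at a 20% rate:
cost = wrights_law_cost(1.00, 1_000, 16_000)
print(f"${cost:.2f} per watt")  # 0.8^4 = 0.4096, so this prints "$0.41 per watt"
```

Under this kind of curve, whichever region accumulates production volume first gets a durable price advantage regardless of local wages, which is the dynamic the study attributes to China’s PV factories.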

Second, Dan looks toward Germany as a model of high-tech manufacturing, but there are some caveats. Most of Germany’s manufacturing prowess comes from small- and medium-sized firms known as the Mittelstand. And the Mittelstand’s dominance seems to stem from the close relationship German manufacturing has with the Fraunhofer Society for the Advancement of Applied Research, often just called the Fraunhofer Institutes. Sixty-nine of these research institutes are scattered throughout Germany, working on applied optics, chemicals, high-speed dynamics, materials, and wind energy, to name just a few areas.

I haven’t done a deep dive yet into Dan’s writings to see if he has looked at this important link between research and output, but I hope he does. There is a lot to be learned from the German model and I am still hopeful that the lessons could be applied to US policy.

]]>
https://techliberation.com/2018/08/02/the-definition-of-technology-matters-for-tech-policy-and-growth/feed/ 2 76331
GDPR Compliance: The Price of Privacy Protections https://techliberation.com/2018/07/09/gdpr-compliance-the-price-of-privacy-protections/ https://techliberation.com/2018/07/09/gdpr-compliance-the-price-of-privacy-protections/#respond Tue, 10 Jul 2018 00:43:36 +0000 https://techliberation.com/?p=76312

In preparation for a Federalist Society teleforum call that I participated in today about the compliance costs of the EU’s General Data Protection Regulation (GDPR), I gathered together some helpful recent articles on the topic and put together some talking points. I thought I would post them here and try to update this list in coming months as I find new material. (My thanks to Andrea O’Sullivan for a major assist on coming up with all this.)

Key Points :

  • GDPR is no free lunch; compliance is very costly
      • All regulation entails trade-offs, no matter how well-intentioned the rules are
      • $7.8 billion estimated compliance cost for U.S. firms already
      • Punitive fines can reach €20 million or 4 percent of a firm’s global annual revenue, whichever is greater
      • Vagueness of language leads to considerable regulatory uncertainty — no one knows what “compliance” looks like
      • Even EU member states do not know what compliance looks like: 17 of 24 regulatory bodies polled by Reuters said they were unprepared for GDPR
  • GDPR will hurt competition & innovation; favors big players over small
      • Google, Facebook & others are beefing up compliance departments. (EU official Vera Jourova: “They have the money, an army of lawyers, an army of technicians and so on.”)
      • Smaller firms exiting or dumping data that could be used to provide better, more tailored services
      • PwC survey found that 88% of companies surveyed spent more than $1 million on GDPR preparations, and 40% more than $10 million.
      • Before GDPR, half of all EU ad spend went to Google. The first day after it took effect, an astounding 95 percent went to Google.
      • In essence, the GDPR amounts to the EU surrendering on the idea that competition is possible going forward
      • The law will actually benefit the same big companies that the EU has been going after on antitrust grounds. Meanwhile, the smaller innovators and innovations will suffer.

  • GDPR likely to raise costs to consumers, or diminish choice/quality
      • Consumers care about privacy, but they also care about choice, convenience, and low-cost services
      • The modern data-driven economy has given consumers access to an unparalleled cornucopia of information and services and it is remarkable how much of that content and how many of those services are offered to the public at no charge to them. That’s a real benefit.  
      • But if you take all the data out of the Data Economy, you won’t have much of an economy left
      • “Many organizations will pass these costs on to consumers either by erecting paywalls or forcing users to view more ads.”
      • Websites blacked out post-GDPR: Instapaper, Los Angeles Times, Chicago Tribune (all Tronc- and Lee Enterprises-owned media platforms), A&E Networks websites.
      • “EU-only” web experience: stripped-down websites without illustrations or images, e.g., NPR and USA Today.
      • The Washington Post is charging for a more expensive GDPR-compliant subscription.
  • GDPR hurts global flow of information; worsens problem of data localization
    • Rules only allow data to move to jurisdictions that offer an adequate level of protection
    • Cloud computing? Cloud architects are building costly new infrastructure that can isolate and inspect EU data to ensure it is not “sent” to the wrong jurisdiction.
    • Another step toward a more “bordered” Internet
    • Likely to just create more walled gardens
    • Max Schrems: “Unfortunately data localization is probably the best solution right now. It’s not really a solution that appeals to me a lot, but I think we need data localization for other reasons anyways, like load times and so on.”
    • Roundabout way to impose tariffs? Data-based firms are largely external to EU.
  • GDPR doesn’t solve bigger problem of government access to data
    • EU Data Retention Directive: third parties must keep data for law enforcement for up to two years (passed after terrorist attacks).
    • EU member states often have no FISA-like body overseeing government wiretap requests. France and the UK have no court apparatus governing surveillance — instead issued directly by administrative bodies. In Germany, their FBI equivalent can install a “Federal Trojan” virus directly into third party platforms without their knowledge.
  • GDPR doesn’t really move the needle much in terms of real privacy protection
    • heavy-handed, top-down regulatory regimes don’t always accomplish their goals when it comes to privacy
    • what consumers need is new competitive options and privacy innovations
    • Unfortunately, the world won’t get the new choices we need if regulations like the GDPR essentially punish them with regulatory compliance costs that only the largest current incumbents can possibly absorb

Related Research & Articles:

]]>
https://techliberation.com/2018/07/09/gdpr-compliance-the-price-of-privacy-protections/feed/ 0 76312
4 Ways Technology Helped During Hurricanes Harvey and Irma (and 1 more it could have) https://techliberation.com/2017/09/14/4-ways-technology-helped-during-hurricanes-harvey-and-irma-and-1-more-it-could-have/ https://techliberation.com/2017/09/14/4-ways-technology-helped-during-hurricanes-harvey-and-irma-and-1-more-it-could-have/#comments Thu, 14 Sep 2017 15:25:34 +0000 https://techliberation.com/?p=76188

Hurricanes Harvey and Irma mark the first time two Category 4 hurricanes have made U.S. landfall in the same year. Current estimates put the combined damage from the two hurricanes between $150 and $200 billion.

If there is any positive story within these horrific disasters, it is that these events have prompted a renewed sense of community and an outpouring of support from across the nation, from the recent star-studded Hand in Hand relief concert and J.J. Watt's Twitter fundraiser to smaller efforts by local marching bands and police departments in faraway states.

What has made these disaster relief efforts different from past hurricanes? These recent efforts have been enabled by technology that was unavailable during past disasters, such as Hurricane Katrina.

  1. Airbnb

Many people chose to evacuate once the paths and intensity of Hurricanes Irma and Harvey became clear. In fact, Hurricane Irma created the largest evacuation in US history. As a result, many hotels quickly filled.

Airbnb has been able to step in to allow local citizens to help in this situation by waiving its fees and encouraging owners to offer space free of charge to those displaced by the disasters. The website also makes it easy for evacuees to search and find available lodging. The service not only helps evacuees, but also volunteers and contractors coming to the area to help with recovery.

Additionally, the website helped authorities locate and communicate with U.S. citizens who may have been in rented residences on Caribbean islands after the storm hit.

Licensing or other regulatory requirements could also limit which owners are able to offer space in times of emergency, preventing good Samaritans from being able to help. Applying other lodging regulations, interpreting zoning laws to cover short-term rentals, or imposing outright bans on services like Airbnb could prevent this free service in the future. While Airbnb can waive its own fees, it cannot waive the state or local regulations that govern whether owners may offer their homes. Often such regulations or enforcement attempts target hosts rather than companies, like the zoning interpretation the city of Miami considered. If there are concerns about legality, individuals might be less likely to fill this void and help their neighbors or strangers through such services in times of crisis.

  2. Drones

The Red Cross called for volunteer drone pilots who had the necessary paperwork and authorization to operate in the impacted areas and, for the first time, used drones in a one-week test to deliver supplies and survey disaster-relief needs in some of the hardest-hit areas.

But delivering supplies is not the only way drones are able to assist with recovery efforts. Verizon and AT&T were able to use drones to determine whether equipment was damaged and causing outages, and then respond accordingly. Similarly, some insurers have been deploying drones to allow adjusters to view and assess heavily damaged areas sooner.

In the immediate aftermath, the FAA prohibited private drones from flying in areas around Houston, though the agency issued some permits allowing drones to assist in locating those who were trapped and surveying the damage. There were many legal concerns to be considered in the initial aftermath and in the future use of drones, including both property issues and concerns about interference. A less restrictive environment might have allowed drones to provide greater assistance sooner, with minimal risk of privacy invasion or interference.

  3. Tesla

Tesla issued an over-the-air update for additional battery life (an upgrade that is normally available for a fee) to give owners the range to evacuate along their preferred route. While some may have concerns that this power could be used negatively by the corporation, the episode shows that over-the-air updates could be used to improve safety or other features in the future.

Additionally, one of the issues in any evacuation is traffic. The more cars on the road (particularly as weather worsens), the greater the risk of accidents. Assuming there is not too much precautionary interference, in the future self-driving cars could aid in making evacuation traffic safer and less stressful.

  4. Social media and messaging apps help connect neighbors and get help

Need help? There’s an app for that.

The Cajun Navy gained renown for rescuing neighbors in the Southern Louisiana floods, and the app Zello made joining its ranks even easier during Hurricane Harvey. Similarly, the app allowed victims of the storms to share information as power went out, using less bandwidth than phone calls.

Traditional social media also played a role in search-and-rescue efforts. In some cases, when 9-1-1 failed, those in need of help turned to Twitter and Facebook. Neighbors, friends, or even strangers could use the information to provide help when traditional responders were unavailable. So many people were relying on social media that the Coast Guard had to issue a statement requesting that people call, not tweet at, them for rescue.

Social media certainly had problems with misinformation, but in recent disasters it has proven to be an important part of disaster response and preparedness.

The one that might have been….

Could Flytenow have provided a possible solution to some of the concerns about airline price-gouging in the wake of Hurricane Irma? Flytenow hoped to make flight sharing a reality for the masses, but was shut down due to the FAA’s interpretations regarding common carriers. There are limitations on flight sharing; however, in a crisis, allowing this type of arrangement could have resulted in a greater number of available flights. If demand was high, available pilots planning their own evacuation might have posted additional seats for others in exchange for a share of the expense of the flight. The result likely would have been more seats available and lower prices overall. Using a platform rather than a traditional bulletin-board arrangement would allow a service to limit availability to pilots who are certified or otherwise shown to be competent to fly in difficult conditions. Perhaps Flytenow would even have offered some sort of good Samaritan program, like Airbnb’s, to help get flights to those most in need of evacuation. Still, because of regulatory precaution, at least for now, we will not know the potential impact flight sharing could have in such natural disasters.

Conclusion

Technology is changing the way we respond to disasters and assist with relief efforts. As Alison Griswold writes at Quartz, this technology-enabled response has redefined how people provide assistance in the wake of disaster. We cannot plan for how such technology will react to difficult situations or for the actions of such platforms’ users, but the recent events in Florida and Texas show that it can enable us to help one another even more. The more technology is allowed to participate in a response, the better it enables people to connect to those in need in the wake of disaster.

]]>
https://techliberation.com/2017/09/14/4-ways-technology-helped-during-hurricanes-harvey-and-irma-and-1-more-it-could-have/feed/ 1 76188
Innovation Policy at the Mercatus Center: The Shape of Things to Come https://techliberation.com/2017/04/11/innovation-policy-at-the-mercatus-center-the-shape-of-things-to-come/ https://techliberation.com/2017/04/11/innovation-policy-at-the-mercatus-center-the-shape-of-things-to-come/#respond Tue, 11 Apr 2017 15:11:40 +0000 https://techliberation.com/?p=76133

Written with Christopher Koopman and Brent Skorup (originally published on Medium on 4/10/17)

Innovation isn’t just about the latest gee-whiz gizmos and gadgets. That’s all nice, but something far more profound is at stake: Innovation is the single most important determinant of long-term human well-being. There exists widespread consensus among historians, economists, political scientists and other scholars that technological innovation is the linchpin of expanded economic growth, opportunity, choice, mobility, and human flourishing more generally. It is the ongoing search for new and better ways of doing things that drives human learning and prosperity in every sense — economic, social, and cultural.

As the Industrial Revolution revealed, leaps in economic and human growth cannot be planned. They arise from societies that reward risk takers and legal systems that accommodate change. Our ability to achieve progress is directly proportional to our willingness to embrace and benefit from technological innovation, and it is a direct result of getting public policies right.

The United States is uniquely positioned to lead the world into the next era of global technological advancement and wealth creation. That’s why we and our colleagues at the Technology Policy Program at the Mercatus Center at George Mason University devote so much time and energy to defending the importance of innovation and countering threats to it. Unfortunately, those threats continue to multiply as fast as new technologies emerge.

Indeed, it isn’t easy keeping on top of all of these issues and threats because the only constant in the world of innovation policy — the study of technological change and its impact on social, economic, and political systems — is constant change. You go to sleep one night thinking you’ve got the world figured out, only to awake the next morning to see that another tectonic shift has reshaped the landscape.

In the industrial era, it was hard enough mapping the contours of this field of academic study. This task has grown far more challenging. Computing and Internet-enabled innovations have fundamentally reshaped society and have also helped spawn other technological revolutions in diverse fields such as robotics, autonomous systems, artificial intelligence, big data, the Sharing Economy, 3D printing, virtual reality, aviation, advanced medical technology, blockchain and Bitcoin, and the so-called Internet of Things.

The short-term social and economic disruptions caused by these and other new technologies often lead to backlashes and even occasional “techno-panics.” When those panics bubble over into the political arena, the risk is that misguided regulatory policies will short-circuit opportunities for creators and entrepreneurs to pursue life-enriching innovations.

At the Mercatus Center, where we study these and other topics, our goal is to bring greater focus to these emerging technologies and the many different facets of innovation policy surrounding them. How we accomplish these goals is as challenging as it is exciting. As more and more industries and businesses are affected by these emerging technologies, the decisions that policymakers make about them will have profound effects on large parts of our economy and society.

Specifically, as we place ourselves at the forefront of these debates, our aim is to:

  • Explore how innovation policy affects economic growth and mobility, consumer welfare, and global competitive advantage;
  • Identify barriers to entrepreneurial endeavors and devise a roadmap for how to remove them;
  • Push back against technopanics and overly-broad theories of “technological harm” that could limit innovation opportunities and greater consumer choice; and
  • Confront the legal and ethical concerns surrounding emerging technologies and craft constructive solutions to those problems to avoid solutions of the top-down, “command-and-control” variety.

Overall, our vision is simple: Permissionless innovation must become the norm rather than the exception. This means innovation and innovators are protected against efforts to preemptively control ongoing trial-and-error experimentation. We should let creative minds and empowered entrepreneurs experiment with new and better ways of doing things. It also means that the future of public policy should be rooted in fact-based analysis and not shaped by outlandish fears of hypothetical worst-case scenarios.

Going forward, you will continue to see Mercatus producing research applying permissionless innovation across a host of areas. You can also expect us to begin pursuing big questions about the future.

What if we could reduce the number of deaths on US roadways from 96 people per day to zero? What if we could double life expectancy? Triple it? Wouldn’t it be nice if we could travel from New York to London in three hours? New York to Los Angeles in 2.5 hours? What if we welcomed automation instead of fearing its effects on the workforce? What if we could remove the technical and political barriers keeping us from going to Mars and then beyond it? And so on.

We pose these questions not merely because they are intellectually interesting and important, but also because we hope to make the case for embracing the future with a sense of wonder and optimism about how technological advancement can radically improve human well-being in both the short- and long-run.

It isn’t enough to simply point out where innovators and entrepreneurs are being hindered. It isn’t enough to simply tell people that the future will be bright. We must explain, in real terms, how hindering innovation opportunities undermines our collective ability to constantly improve the human condition.

And because there is a symbiotic relationship between freedom and progress, we must defend our collective ability as a society to achieve very concrete, widely-shared advances in well-being through a general freedom to experiment with new technologies and better ways of doing things.

That is our vision for the Technology Policy Program at the Mercatus Center and we hope it is one that the public and public policymakers will embrace going forward.

]]>
https://techliberation.com/2017/04/11/innovation-policy-at-the-mercatus-center-the-shape-of-things-to-come/feed/ 0 76133
Senate Bill to Keep the Internet Free of Regulation https://techliberation.com/2016/02/26/senate-bill-to-keep-the-internet-free-of-regulation/ https://techliberation.com/2016/02/26/senate-bill-to-keep-the-internet-free-of-regulation/#comments Fri, 26 Feb 2016 15:56:32 +0000 https://techliberation.com/?p=75995

Yesterday, almost exactly one year after the FCC classified Internet service as a common carrier service, Sen. Mike Lee and his Senate cosponsors (including presidential candidates Cruz and Rubio) introduced the Restoring Internet Freedom Act. Sen. Lee also published an op-ed about the motivation for his bill, pointing out the folly of applying a 1930s AT&T Bell monopoly law to the Internet. It’s a short bill, simply declaring that the FCC’s Title II rules shall have no force and it precludes the FCC from enacting similar rules absent an act of Congress.

It’s a shame such a bill even has to be proposed, but then again these are unusual times in politics. The FCC has a history of regulating new industries, like cable TV, without congressional authority. However, enforcing Title II, its most intrusive regulations, on the Internet is something different altogether. Congress was not silent on the issue of Internet regulation, like it was regarding cable TV in the 1960s when the FCC began regulating.

Former Clinton staffer John Podesta said after Clinton signed the 1996 Telecom Act, “Congress simply legislated as if the Net were not there.” That’s a slight overstatement. There is one section of the Telecommunications Act, Section 230, devoted to the Internet and it is completely unhelpful for the FCC’s Open Internet rules. Section 230 declares a US policy of unregulation of the Internet and, in fact, actually encourages what net neutrality proponents seek to prohibit: content filtering by ISPs.

The FCC is filled with telecom lawyers who know existing law doesn’t leave room for much regulation, which is why top FCC officials resisted common carrier regulation until the end. Chairman Wheeler by all accounts wanted to avoid the Title II option until pressured by the President in November 2014. As the Wall Street Journal reported last year, the White House push for Title II “blindsided officials at the FCC” who then had to scramble to construct legal arguments defending this reversal. The piece noted,

The president’s words swept aside more than a decade of light-touch regulation of the Internet and months of work by Mr. Wheeler toward a compromise.

The ersatz “parallel version of the FCC” in the White House didn’t understand the implications of what they were asking for and put the FCC in a tough spot. The Title II rules and legal justifications required incredible wordsmithing but still created internal tensions and undesirable effects, as pointed out by the Phoenix Center and others. This policy reversal, to go the Title II route per the President’s request, also created First Amendment and Section 230 problems for the FCC. At oral argument the FCC lawyer disclaimed any notion that the FCC would regulate filtered or curated Internet access. This may leave a gaping hole in Title II enforcement since all Internet access is filtered to some degree, and new Internet services, like LTE Broadcast, Free Basics, and zero-rated video, involve curated IP content. As I said at the time, the FCC “is stating outright that ISPs have the option to filter and to avoid the rules.”

Nevertheless, Title II creates a permission slip regime for new Internet services that forces tech and telecom companies to invest in compliance lawyers rather than engineers and designers. Hopefully in the next few months the DC Circuit Court of Appeals will strike down the FCC’s net neutrality efforts for a third time. In any case, it’s great to see that Sen. Lee and his cosponsors have made innovation policy a priority and want to continue the light-touch regulation of the Internet.

]]>
https://techliberation.com/2016/02/26/senate-bill-to-keep-the-internet-free-of-regulation/feed/ 1 75995
What Cory Booker Gets about Innovation Policy https://techliberation.com/2015/02/16/what-cory-booker-gets-about-innovation-policy/ https://techliberation.com/2015/02/16/what-cory-booker-gets-about-innovation-policy/#respond Mon, 16 Feb 2015 15:32:43 +0000 http://techliberation.com/?p=75460

Last Wednesday, it was my great pleasure to testify at a Senate Commerce Committee hearing entitled, “The Connected World: Examining the Internet of Things.” The hearing focused “on how devices… will be made smarter and more dynamic through Internet technologies. Government agencies like the Federal Trade Commission, however, are already considering possible changes to the law that could have the unintended consequence of slowing innovation.”

But the session went well beyond the Internet of Things and became a much more wide-ranging discussion about how America can maintain its global leadership for the next-generation of Internet-enabled, data-driven innovation. On both sides of the aisle at last week’s hearing, one Senator after another made impassioned remarks about the enormous innovation opportunities that were out there. While doing so, they highlighted not just the opportunities emanating out of the IoT and wearable device space, but also many other areas, such as connected cars, commercial drones, and next-generation spectrum.

I was impressed by the energy and nonpartisan vision that the Senators brought to these issues, but I wanted to single out the passionate statement that Sen. Cory Booker (D-NJ) delivered when it came his turn to speak because he very eloquently articulated what’s at stake in the battle for global innovation supremacy in the modern economy. (Sen. Booker’s remarks were not published, but you can watch them starting at the 1:34:00 mark of the hearing video.)

Embrace the Opportunity

First, Sen. Booker stressed the enormous opportunity with the Internet of Things. “This is a phenomenal opportunity for a bipartisan, profoundly patriotic approach to an issue that can explode our economy. I think that there are trillions of dollars, creating countless jobs, improving quality of life, [and] democratizing our society,” he said. “We can’t even imagine the future that this portends of, and we should be embracing that.”

Sen. Booker has it exactly right. And for more details about the enormous innovation opportunities associated with the Internet of Things, see Section 2 of my new law review article, “The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation,” which provides concrete evidence.

Protect America’s Competitive Advantage in the Innovation Age

Second, Sen. Booker highlighted the importance of getting our policy vision right to achieve those opportunities. He noted that “a lot of my concerns are what my Republican colleagues also echoed, which is we should be doing everything possible to encourage this and nothing to restrict it.”

“America right now is the net exporter of technology and innovation in the globe, and we can’t lose that advantage,” he said, and “we should continue to be the global innovators on these areas.” He went on to say:

And so, from copyright issues, security issues, privacy issues… all of these things are worthy of us wrestling and grappling with, but to me we cannot stop human innovation and we can’t give advantages in human innovation to other nations that we don’t have. America should continue to lead.

This is something I have been writing actively about now for many years and I agree with Sen. Booker that America needs to get our policy vision right to ensure we don’t lose ground in the international competition to see who will lead the next wave of Internet-enabled innovation. As I noted in my testimony, “If America hopes to be a global leader in the Internet of Things, as it has been for the Internet more generally over the past two decades, then we first have to get public policy right. America took a commanding lead in the digital economy because, in the mid-1990s, Congress and the Clinton administration crafted a nonpartisan vision for the Internet that protected ‘permissionless innovation’—the idea that experimentation with new technologies and business models should generally be permitted without prior approval.”

Meanwhile, as I documented in my longer essay, “Why Permissionless Innovation Matters: Why does economic growth occur in some societies & not in others?” our international rivals languished on this front because they strapped their tech sectors with layers of regulatory red tape that thwarted digital innovation.

Reject Fear-Based Policymaking

Third, and perhaps most importantly, Sen. Booker stressed how essential it was that we reject a fear-based approach to public policymaking. As he noted at the hearing about these new information technologies, “there’s a lot of legitimate fears, but in the same way of every technological era, there must have been incredible fears.”

He cited, for example, the rise of air travel and the onset of humans taking flight. Sen. Booker correctly noted that while that must have been quite jarring at first, we quickly came to realize the benefits of that new innovation. The same will be true for new technologies such as the Internet of Things, connected cars, and private drones, Booker argued. In each case, early fears about these technologies could lead to an overly precautionary approach to policy. “But for us to do anything to inhibit that leap in humanity to me seems unfortunate,” he said.

Once again, the Senator has it exactly right. As I noted in my law review article on “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” as well as my recent essay, “Muddling Through: How We Learn to Cope with Technological Change,” humans have exhibited the uncanny ability to adapt to changes in their environment, bounce back from adversity, and learn to be resilient over time. A great deal of wisdom is born of experience, including experiences that involve risk and the possibility of occasional mistakes and failures while both developing new technologies and learning how to live with them. More often than not, citizens have found ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes.

Booker gets that and understands why we need to be patient to allow that process to unfold once again so that we can enjoy the abundance of riches that will accompany a more innovative economy.

Avoiding Global Innovation Arbitrage

Sen. Booker also highlighted how some existing government legal and regulatory barriers could hold back progress. On the wireless spectrum front he noted that “the government hoards too much spectrum and there is a need for more spectrum out there. Everything we are talking about,” he argued, “is going to necessitate more spectrum.” Again, 100% correct. Although some spectrum reform proposals (licensed vs. unlicensed, for example) will still prove contentious, we can at least all agree that we have to work together to find ways to open up more spectrum since the coming Internet of Things universe of technologies is going to demand lots of it.

Booker also noted that another area where fear undermines American leadership is the issue of private drone use. He noted that “the potential possibilities for drone technology to alleviate burdens on our infrastructure, to empower commerce, innovation, jobs… to really open up unlimited opportunities in this country is pretty incredible to me.”

The problem is that existing government policies, enforced by the Federal Aviation Administration (FAA), have been holding back progress. And that has had consequences in terms of global competitiveness. “As I watch our government go slow in promulgating rules holding back American innovation,” Booker said, “what happened as a result of that is that innovation has spread to other countries that don’t have these rules (or have) put in place sensible regulations. But now we [are] seeing technology exported from America and going other places.”

Correct again! I wrote about this problem in a recent essay on “global innovation arbitrage,” in which I noted how “Capital moves like quicksilver around the globe today as investors and entrepreneurs look for more hospitable tax and regulatory environments. The same is increasingly true for innovation. Innovators can, and increasingly will, move to those countries and continents that provide a legal and regulatory environment more hospitable to entrepreneurial activity.”

That’s already happening with drone innovation, as I documented in that piece. Evidence suggests that the FAA’s heavy-handed and overly precautionary approach to drones has encouraged some innovators to flock overseas in search of a more hospitable regulatory environment.

Luckily, just this weekend, the FAA finally announced its (much-delayed) rules for private drone operations. (Here’s a summary of those rules.) Unfortunately, the rules are a bit of a mixed bag: some greater leeway is provided for very small drones, but the rules will still be too restrictive to allow for other innovative applications, such as widespread drone delivery (which has Amazon, among others, angry).

Bottom line: if our government doesn’t take a more flexible, light-touch approach to these and other cutting-edge technologies, then some of our most creative minds and companies are going to bolt.

I dealt with all of these innovation policy issues in far more detail in my latest little book Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, which I condensed further still into this essay on, “Embracing a Culture of Permissionless Innovation.” But Sen. Booker has offered us an even more concise explanation of just what’s at stake in the battle for innovation leadership in the modern economy. His remarks point the way forward and illustrate, as I have noted before, that innovation policy can and should be a nonpartisan issue.

 


Additional Reading

 

]]>
https://techliberation.com/2015/02/16/what-cory-booker-gets-about-innovation-policy/feed/ 0 75460
Permissionless Innovation & Commercial Drones https://techliberation.com/2015/02/04/permissionless-innovation-commercial-drones/ https://techliberation.com/2015/02/04/permissionless-innovation-commercial-drones/#comments Wed, 04 Feb 2015 23:20:57 +0000 http://techliberation.com/?p=75392

Farhad Manjoo’s latest New York Times column, “Giving the Drone Industry the Leeway to Innovate,” discusses how the Federal Aviation Administration’s (FAA) current regulatory morass continues to thwart many potentially beneficial drone innovations. I particularly appreciated this point:

But perhaps the most interesting applications for drones are the ones we can’t predict. Imposing broad limitations on drone use now would be squashing a promising new area of innovation just as it’s getting started, and before we’ve seen many of the potential uses. “In the 1980s, the Internet was good for some specific military applications, but some of the most important things haven’t really come about until the last decade,” said Michael Perry, a spokesman for DJI [maker of Phantom drones]. . . . He added, “Opening the technology to more people allows for the kind of innovation that nobody can predict.”

That is exactly right and it reflects the general notion of “permissionless innovation” that I have written about extensively here in recent years. As I summarized in a recent essay: “Permissionless innovation refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention or business model will bring serious harm to individuals, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.”

The reason that permissionless innovation is so important is that innovation is more likely in political systems that maximize breathing room for ongoing economic and social experimentation, evolution, and adaptation. We don’t know what the future holds. Only incessant experimentation and trial-and-error can help us achieve new heights of greatness. If, however, we adopt the opposite approach of “precautionary principle”-based reasoning and regulation, then these chances for serendipitous discovery evaporate. As I put it in my recent book, “living in constant fear of worst-case scenarios—and premising public policy upon them—means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning, it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.”

In this regard, the unprecedented growth of the Internet is a good example of how permissionless innovation can significantly improve consumer welfare and our nation’s competitive status relative to the rest of the world. And this also holds lessons for how we treat commercial drone technologies, as Jerry Brito, Eli Dourado, and I noted when filing comments with the FAA back in April 2013. We argued:

Like the Internet, airspace is a platform for commercial and social innovation. We cannot accurately predict to what uses it will be put when restrictions on commercial use of UASs are lifted. Nevertheless, experience shows that it is vital that innovation and entrepreneurship be allowed to proceed without ex ante barriers imposed by regulators. We therefore urge the FAA not to impose any prospective restrictions on the use of commercial UASs without clear evidence of actual, not merely hypothesized, harm.

Manjoo builds on that same point in his new Times essay when he notes:

[drone] enthusiasts see almost limitless potential for flying robots. When they fantasize about our drone-addled future, they picture not a single gadget, but a platform — a new class of general-purpose computer, as important as the PC or the smartphone, that may be put to use in a wide variety of ways. They talk about applications in construction, firefighting, monitoring and repairing infrastructure, agriculture, search and response, Internet and communications services, logistics and delivery, filmmaking and wildlife preservation, among other uses.

If only the folks at the FAA and in Congress saw things this way. We need to open up the skies to the amazing innovative potential of commercial drone technology, especially before the rest of the world seizes the opportunity to jump into the lead on this front.

___________________________

Additional Reading

]]>
https://techliberation.com/2015/02/04/permissionless-innovation-commercial-drones/feed/ 4 75392
The 10 Most-Read Posts of 2014 https://techliberation.com/2014/12/30/the-10-most-read-posts-of-2014/ https://techliberation.com/2014/12/30/the-10-most-read-posts-of-2014/#comments Tue, 30 Dec 2014 16:36:34 +0000 http://techliberation.com/?p=75156

As 2014 draws to a close, we take a look back at the most-read posts from the past year at The Technology Liberation Front. Thank you for reading, and enjoy.

10. New York’s financial regulator releases a draft of ‘BitLicense’ for Bitcoin businesses. Here are my initial thoughts.

In July, Jerry Brito wrote about New York’s proposed framework for regulating digital currencies like Bitcoin.

My initial reaction to the rules is that they are a step in the right direction. Whether one likes it or not, states will want to license and regulate Bitcoin-related businesses, so it’s good to see that New York engaged in a thoughtful process, and that the rules they have proposed are not out of the ordinary.
9. Google Fiber: The Uber of Broadband

In February, I noted some of the parallels between Google Fiber and ride-sharing, in that new entrants are upending the competitive and regulatory status quo to the benefit of consumers.

The taxi registration systems and the cable franchise agreements were major regulatory mistakes. Local regulators should reduce regulations for all similarly-situated competitors and resist the temptation to remedy past errors with more distortions.
8. The Debate over the Sharing Economy: Talking Points & Recommended Reading

In September, Adam Thierer appeared on Fox Business Network’s Stossel show to talk about the sharing economy. In a TLF post, he expands upon his televised commentary and highlights five main points.

7. CES 2014 Report: The Internet of Things Arrives, but Will Washington Welcome It?

After attending the 2014 Consumer Electronics Show in January, Adam wrote a prescient post about the promise of the Internet of Things and the regulatory risks ahead.

When every device has a sensor, a chip, and some sort of networking capability, amazing opportunities become available to consumers…. But those same capabilities are exactly what raise the blood pressure of many policymakers and policy activists who fear the safety, security, or privacy-related problems that might creep up in a world filled with such technologies.
6. Defining “Technology”

Earlier this year, Adam compiled examples of how technologists and experts define “technology,” with entries ranging from the Oxford Dictionary to Peter Thiel. It’s a slippery exercise, but

if you are going to make an attempt to either study or critique a particular technology or technological practice or development, then you probably should take the time to tell us how broadly or narrowly you are defining the term “technology” or “technological process.”
5. The Problem with “Pessimism Porn”

Adam highlights the tendency of tech press, academics, and activists to mislead the public about technology policy by sensationalizing technology risks.

The problem with all this, of course, is that it perpetuates societal fears and distrust. It also sometimes leads to misguided policies based on hypothetical worst-case thinking…. [I]f we spend all our time living in constant fear of worst-case scenarios—and premising public policy upon them—it means that best-case scenarios will never come about.
4. Mark T. Williams predicted Bitcoin’s price would be under $10 by now; it’s over $600

Professor Mark T. Williams predicted in December 2013 that by mid-2014, Bitcoin’s price would fall to below $10. In mid-2014, Jerry commends Prof. Williams for providing, unlike most Bitcoin watchers, a bold and falsifiable prediction about Bitcoin’s value. However, as Jerry points out, that prediction was erroneous: Bitcoin’s 2014 collapse never happened and the digital currency’s value exceeded $600.

3. What Vox Doesn’t Get About the “Battle for the Future of the Internet”

In May, Tim Lee wrote a Vox piece about net neutrality and the Netflix-Comcast interconnection fight. Eli Dourado posted a widely-read and useful corrective to some of the handwringing in the Vox piece about interconnection, ISP market power, and the future of the Internet.

I think the article doesn’t really consider how interconnection has worked in the last few years, and consequently, it makes a big deal out of something that is pretty harmless…. There is nothing unseemly about Netflix making … payments to Comcast, whether indirectly through Cogent or directly, nor is there anything about this arrangement that harms “the little guy” (like me!).
2. Muddling Through: How We Learn to Cope with Technological Change

The second most-read TLF post of 2014 is also the longest and most philosophical in this top-10 list. Adam wrote a popular and in-depth post about the social effects of technological change and notes that technology advances are largely for consumers’ benefit, yet “[m]odern thinking and scholarship on the impact of technological change on societies has been largely dominated by skeptics and critics.” The nature of human resilience, Adam explains, should encourage a cautiously optimistic view of technological change.

  1. Help me answer Senate committee’s questions about Bitcoin

Two days into 2014, Jerry wrote the most-read TLF piece of the past year. Jerry had testified before the Senate Homeland Security and Governmental Affairs Committee in 2013 as an expert on Bitcoin. The Committee requested more information about Bitcoin post-hearing and Jerry solicited comment from our readers.

Thank you to our loyal readers for continuing to visit The Technology Liberation Front. It was a busy year for tech and telecom policy, and 2015 promises to be similarly exciting. Have a happy and safe New Year!

]]>
https://techliberation.com/2014/12/30/the-10-most-read-posts-of-2014/feed/ 1 75156
Government Surveillance: Is It Time for Another Church Committee? https://techliberation.com/2014/12/17/government-surveillance-is-it-time-for-another-church-committee/ https://techliberation.com/2014/12/17/government-surveillance-is-it-time-for-another-church-committee/#comments Wed, 17 Dec 2014 21:32:29 +0000 http://techliberation.com/?p=75085

This morning, a group of organizations led by Citizens for Responsibility and Ethics in Washington (CREW), R Street, and the Sunlight Foundation released a public letter to House Speaker John Boehner and Minority Leader Nancy Pelosi calling for enhanced congressional oversight of U.S. national security surveillance policies.

The letter—signed by more than fifty organizations, including the Electronic Frontier Foundation, the Competitive Enterprise Institute, and the Brennan Center for Justice at New York University School of Law, as well as a handful of individuals, among them Pentagon Papers whistleblower Daniel Ellsberg—expresses deep concerns about the expansive scope and limited accountability of the intelligence activities and agencies famously exposed by whistleblower Edward Snowden in 2013. The letter states:

Congress is responsible for authorizing, overseeing, and funding these programs. In recent years, however, the House of Representatives has not always effectively performed its duties. The time for modernization is now. When the House convenes for the 114th Congress in January and adopts rules, the House should update them to enhance opportunities for oversight by House Permanent Select Committee on Intelligence (“HPSCI”) members, members of other committees of jurisdiction, and all other representatives. The House should also consider establishing a select committee to review intelligence activities since 9/11. We urge the following reforms be included in the rules package.

The proposed modernization reforms include:

1) modernizing HPSCI membership to more accurately reflect House interests by allowing chairs and ranking members of other committees with intelligence jurisdiction to select a designee on HPSCI;

2) allowing each HPSCI Member to designate a staff member of his or her choosing to represent their interests on the committee, as is the practice in the Senate;

3) making all unclassified intelligence reports quickly available to the public;

4) improving the speed and transparency of HPSCI responsiveness to member requests for information; and

5) improving general HPSCI transparency by better informing members of relevant activities like upcoming closed hearings, legislative markups, and committee activities.

The groups also urge reforms to empower all members of Congress to be informed of and involved with executive intelligence agencies’ activities. They are:

1) making all communications from the executive branch available to all Members unless the sender explicitly indicates otherwise;

2) reaffirming Members’ abilities to access, review, and publicly discuss materials already available to the public that are classified by the executive branch, as is the case with the Snowden leaks. Members should feel comfortable to discuss this kind of information without fear of reprimand;

3) providing Members with at least one staff member with access to classified information through a Top Secret/Special Compartmented Information (TS/SCI) clearance;

4) allowing Members to speak with whistleblowers without fear of reprisal; and

5) improving training for Members and staff on how to handle classified information and conduct effective congressional oversight of classified matters.

Over at the CREW blog, Daniel Schuman provides some more context on the problems these groups seek to address:

Members of Congress rely on staff to do a lot of work, but most staff working on intelligence issues are not permitted to hold the necessary security clearances to do their jobs. Sometimes, the Intelligence Committee in the House intercepts mail from the executive branch addressed to all members of Congress. That same committee sits on unclassified reports, refusing to make them available to the public. Briefings provided by the intelligence community are announced for inconvenient times, do not provide enough detailed information, and members of Congress often are not allowed to take notes on what was said. The executive branch has 666,000 employees with top secret/SCI clearance and 541,000 contractors with top secret/SCI clearance, and yet often times members of Congress are not permitted to talk with one another about their briefings. Members of Congress are not allowed to publicly speak about—and staff may not read—classified information that has been published in the newspaper or on the internet. This makes no sense for the deliberative body that was designed as a check on executive power.

While these proposed reforms aim to improve congressional oversight through common-sense changes or clarifications in House procedure and committee structure, they still address only the failures of intelligence oversight that we have gleaned from our current, limited knowledge of the byzantine maze of surveillance agency activities. The picture painted by the little knowledge we do have is not pretty. An associated white paper presenting the reforms in more detail notes:

The last decade-and-a-half has witnessed major intelligence community failures. From the inability to connect the dots on 9/11 to false claims about weapons of mass destruction in Iraq, from the unlawful commission of torture to the inability to predict the Arab spring, from lying to Congress about the NSA to CIA surveillance of Senate staff, the intelligence community has a credibility gap. Moreover, with recent revelations about secret government activities, to the apparent surprise of many members of Congress, it is increasingly clear that Congress has not engaged in effective oversight of the intelligence community.

To get a fuller picture of the extent of the problem, the letter proposes that the House adopt a special committee to conduct a distinct, broad-based review of the activities of the intelligence community after 9/11. Similar committees have been assembled in the past to address previous shortcomings:

The last time so many revelations of government misdeeds came to light in news reports, Congress reacted by forming two special committees to investigate intelligence community activities. The reports by the Church and Pike Committees led to wholesale reforms of the intelligence community, including improving congressional oversight mechanisms. The magnitude of current revelations and intelligence community failures leads to this conclusion: the House (and Senate) must establish a distinct, broad-based review of the activities of the intelligence community since 9/11. The House should establish a committee modeled after the Church or Pike Committees, provide it adequate staffing and financial support, and give it a broad mandate to review intelligence community activities, engage in public reporting wherever possible, and issue recommendations for reform.

The Church and Pike Committees of the 1970s were products of a decade of explosive revelations of government surveillance run amok. The white paper cites a 1974 New York Times exclusive by Seymour Hersh revealing that the CIA had been inspecting the mail, telephone communications, and residences of tens of thousands of uncharged private citizens since the 1950s. Earlier that year, allegations that the U.S. Army had been conducting illegal surveillance of American citizens were verified, and the practices repudiated, by Senator Sam Ervin’s military surveillance investigations. In 1975, a bombshell NSA investigation published by the Times reported that the then largely unknown intelligence unit “eavesdrops on virtually all cable, Telex, and other nontelephone communications leaving and entering the United States” and “uses computers to sort out and obtain intelligence from the contents” in the now-infamous Project Shamrock. The revealed executive abuses of the Nixon administration provided the cherry on top of a growing distrust of, and anger with, surreptitious U.S. surveillance practices.

Today is another era of outrageous whistleblower reports and rapidly dwindling trust in U.S. surveillance bodies. A mere 24 percent of Americans reported in a 2013 Rasmussen poll that they trust the government to “do the right thing” most of the time. (A minuscule 4 percent of your fellow Pollyanna patriots trust Uncle Sam all of the time.) Meanwhile, technological advances have allowed U.S. intelligence agencies a greater degree of potential (and, as Snowden revealed, actual) surveillance than ever before. This gap between trust and power simply cannot continue indefinitely.

While not without their problems, the Church and Pike committees are noteworthy milestones in reclaiming congressional accountability over executive intelligence agencies run amok. Creating a new committee to comprehensively assess current surveillance agency activities, warts and all, and to recommend accountability measures addressing the unknown excesses that likely lurk in the shadows is one step in the right direction toward beating back the tentacles of unlawful government surveillance.

But if there’s one thing we’ve learned from the fruits of the 1970s committees—namely, the Foreign Intelligence Surveillance Act (FISA) of 1978—it’s that what once served as a hindrance to government abuses may one day become a party to them. For example, the Foreign Intelligence Surveillance Court (FISC) established by FISA was intended to provide critical oversight of federal spying programs, but today it is limited by the inadequate tools available to verify whether surveillance programs are lawful.

Imposing accountability on agencies whose missions are devoted to secrecy is a tough nut to crack. Our history struggling with this challenge suggests that these proposed reforms are good preliminary actions. But watching the watchers will continue to be an omnipresent duty.

]]>
https://techliberation.com/2014/12/17/government-surveillance-is-it-time-for-another-church-committee/feed/ 1 75085
Mark T. Williams predicted Bitcoin’s price would be under $10 by now; it’s over $600 https://techliberation.com/2014/05/30/mark-t-williams-predicted-bitcoins-price-would-be-under-10-by-now-its-over-600/ https://techliberation.com/2014/05/30/mark-t-williams-predicted-bitcoins-price-would-be-under-10-by-now-its-over-600/#comments Fri, 30 May 2014 14:43:41 +0000 http://techliberation.com/?p=74581

In April I had the opportunity to testify before the House Small Business Committee on the costs and benefits of small business use of Bitcoin. It was a lively hearing, especially thanks to fellow witness Mark T. Williams, a professor of finance at Boston University. To say he was skeptical of Bitcoin would be an understatement.

Whenever people make the case that Bitcoin will inevitably collapse, I ask them to define collapse and name a date by which it will happen. I sometimes even offer to make a bet. As Alex Tabarrok has explained, bets are a tax on bullshit.

So one thing I really appreciate about Prof. Williams is that unlike any other critic, he has been willing to make a clear prediction about how soon he thought Bitcoin would implode. On December 10, he told Tim Lee in an interview that he expected Bitcoin’s price to fall to under $10 in the first half of 2014. A week later, on December 17, he clearly reiterated his prediction in an op-ed for Business Insider:

I predict that Bitcoin will trade for under $10 a share by the first half of 2014, single digit pricing reflecting its option value as a pure commodity play.

Well, you know where this is going. We’re now five months into the year. How is Bitcoin doing?

[Chart: CoinDesk Bitcoin Price Index]

It’s in the middle of a rally, with the price crossing $600 for the first time in a couple of months. Yesterday Dish Network announced it would begin accepting Bitcoin payments from customers, making it the largest company yet to do so.

None of this is to say that Bitcoin’s future is assured. It is a new and still experimental technology. But I think we can put to bed the idea that it will implode in the short term because it’s not like any currency or exchange system that came before, which was essentially Williams’s argument.

]]>
https://techliberation.com/2014/05/30/mark-t-williams-predicted-bitcoins-price-would-be-under-10-by-now-its-over-600/feed/ 1 74581
What Vox Doesn’t Get About the “Battle for the Future of the Internet” https://techliberation.com/2014/05/02/what-vox-doesnt-get-about-the-battle-for-the-future-of-the-internet/ https://techliberation.com/2014/05/02/what-vox-doesnt-get-about-the-battle-for-the-future-of-the-internet/#comments Fri, 02 May 2014 18:56:31 +0000 http://techliberation.com/?p=74487

My friend Tim Lee has an article at Vox that argues that interconnection is the new frontier on which the battle for the future of the Internet is being waged. I think the article doesn’t really consider how interconnection has worked in the last few years, and consequently, it makes a big deal out of something that is pretty harmless.

How the Internet used to work

The Internet is a network of networks. Your ISP is a network. It connects to other ISPs and exchanges traffic with them. Since a connection between two ISPs is roughly equally valuable to each of them, this exchange often happens through “settlement-free peering,” in which networks swap traffic on an unpriced basis.

Not every ISP connects directly to every other ISP. For example, a local ISP in California probably doesn’t connect directly to a local ISP in New York. If you’re an ISP that wants to be sure your customers can reach every other network on the Internet, you have to purchase “transit” services from a bigger or more specialized ISP. Transit allows an ISP to transmit data along what used to be called “the backbone” of the Internet. Transit providers that exchange roughly equally valued traffic with other networks themselves have settlement-free peering arrangements with those networks.
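To make the peering-versus-transit distinction concrete, here is a toy sketch. The network names and topology are invented for illustration, and real routing uses BGP route announcements rather than simple set unions; the point is only that buying transit extends an ISP’s reach beyond its direct peers:

```python
# Toy model of peering vs. transit reachability.
# All network names and the topology are hypothetical.

# Each network directly peers (settlement-free) with a few others.
peers = {
    "CalLocalISP": {"RegionalWest"},
    "NYLocalISP": {"RegionalEast"},
    "BigTransit": {"RegionalWest", "RegionalEast", "CalLocalISP", "NYLocalISP"},
}

def reachable_without_transit(isp):
    """With settlement-free peering alone, an ISP reaches only its direct peers."""
    return peers.get(isp, set())

def reachable_with_transit(isp, transit_provider):
    """Buying transit adds every network the transit provider can reach."""
    return reachable_without_transit(isp) | peers[transit_provider] | {transit_provider}

# The California ISP cannot reach the New York ISP through peering alone...
assert "NYLocalISP" not in reachable_without_transit("CalLocalISP")
# ...but it can once it buys transit from a larger network.
assert "NYLocalISP" in reachable_with_transit("CalLocalISP", "BigTransit")
```

This is also why the “backbone” framing has faded: once a residential ISP like Comcast builds its own nationwide pipes and peers in many locations, it needs to buy far less transit from anyone else.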

How the Internet works now

A few things have changed in the last several years. One major change is that most major ISPs have very large, geographically-dispersed networks. For example, Comcast serves customers in 40 states, and other networks can peer with them in 18 different locations across the US. These 18 locations are connected to each other through very fast cables that Comcast owns. In other words, Comcast is not just a residential ISP anymore. They are part of what used to be called “the backbone,” although it no longer makes sense to call it that since there are so many big pipes that cross the country and so much traffic is transmitted directly through ISP interconnection.

Another thing that has changed is that content providers are increasingly delivering a lot of a) traffic-intensive and b) time-sensitive content across the Internet. This has created the incentive to use what are known as content-delivery networks (CDNs). CDNs are specialized ISPs that locate servers right on the edge of all terminating ISPs’ networks. There are a lot of CDNs—here is one list.

By locating on the edge of each consumer ISP, CDNs are able to deliver content to end users with very low latency and at very fast speeds. For this service, they charge money to their customers. However, they also have to pay consumer ISPs for access to their networks, because the traffic flow is all going in one direction and otherwise CDNs would be making money by using up resources on the consumer ISP’s network.

CDNs’ payments to consumer ISPs are also a matter of equity between the ISP’s customers. Let’s suppose that Vox hires Amazon CloudFront to serve traffic to Comcast customers (they do). If the 50 percent of Comcast customers who wanted to read Vox suddenly started using up so many network resources that Comcast and CloudFront needed to upgrade their connection, who should pay for the upgrade? The naïve answer is to say that Comcast should, because that is what customers are paying them for. But the efficient answer is that the 50 percent who want to access Vox should pay for it, and the 50 percent who don’t want to access it shouldn’t. By Comcast charging CloudFront to access the Comcast network, and CloudFront passing along those costs to Vox, and Vox passing along those costs to customers in the form of advertising, the resource costs of using the network are being paid by those who are using them and not by those who aren’t.
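A back-of-the-envelope sketch of that equity argument, using made-up numbers (one million subscribers, a $100,000 interconnect upgrade; neither figure comes from the post), shows who bears the cost under each allocation:

```python
# Hypothetical numbers, purely illustrative of the cost-allocation argument.
subscribers = 1_000_000
vox_readers = subscribers // 2          # the 50 percent who read Vox
upgrade_cost = 100_000.0                # cost of upgrading the interconnect

# Naive allocation: Comcast eats the cost and spreads it over everyone,
# including subscribers who never visit Vox.
cost_per_subscriber_naive = upgrade_cost / subscribers

# Efficient allocation: Comcast bills CloudFront, CloudFront bills Vox,
# and Vox recovers the cost via ads shown only to its own readers.
cost_per_reader_efficient = upgrade_cost / vox_readers

# Readers pay twice as much per head, but non-readers pay nothing.
assert cost_per_reader_efficient == 2 * cost_per_subscriber_naive
```

The totals collected are identical either way; the only question is whether non-readers cross-subsidize readers.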

What happened with the Netflix/Comcast dust-up?

Netflix used multiple CDNs to serve its content to subscribers. For example, it used a CDN provided by Cogent to serve content to Comcast customers. Cogent ran out of capacity and refused to upgrade its link to Comcast. As a result, some of Comcast’s customers experienced a decline in quality of Netflix streaming. However, Comcast customers who accessed Netflix with an Apple TV, which is served by CDNs from Level 3 and Limelight, never had any problems. Cogent has had peering disputes in the past with many other networks.

To solve the congestion problem, Netflix and Comcast negotiated a direct interconnection. Instead of Netflix paying Cogent and Cogent paying Comcast, Netflix is now paying Comcast directly. They signed a multi-year deal that is reported to reduce Netflix’s costs relative to what they would have paid through Cogent. Essentially, Netflix is vertically integrating into the CDN business. This makes sense. High-quality CDN service is essential to Netflix’s business; they can’t afford to experience the kind of incident that Cogent caused with Comcast. When a service is strategically important to your business, it’s often a good idea to vertically integrate.

It should be noted that what Comcast and Netflix negotiated was not a “fast lane”—Comcast is prohibited from offering prioritized traffic as a condition of its merger with NBC/Universal.

What about Comcast’s market power?

I think that one of Tim’s hangups is that Comcast has a lot of local market power. There are lots of barriers to creating a competing local ISP in Comcast’s territories. Doesn’t this mean that Comcast will abuse its market power and try to gouge CDNs?

Let’s suppose that Comcast is a pure monopolist in a two-sided market. It’s already extracting the maximum amount of rent that it can on the consumer side. Now it turns to the upstream market and tries to extract rent. The problem with this is that it can only extract rents from upstream content producers insofar as it lowers the value of the rent it can collect from consumers. If customers have to pay higher Netflix bills, then they will be less willing to pay Comcast. The fact that the market is two-sided does not significantly increase the amount of monopoly rent that Comcast can collect.
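That argument can be written out as a stylized model. The valuation number is arbitrary and the assumption of full pass-through (Netflix raises its subscription price by exactly the fee it pays) is a simplification, but it captures why the two-sided structure doesn’t enlarge the pie a monopolist can grab:

```python
# Stylized two-sided-market model; all numbers are hypothetical.
# A consumer values the broadband-plus-Netflix bundle at V in total, and
# Netflix is assumed to pass any interconnection fee straight through
# to its subscription price.

V = 100.0  # consumer's total willingness to pay for broadband + Netflix

def comcast_rent(fee_to_netflix):
    """Total rent: the fee collected from Netflix plus whatever the consumer
    is still willing to pay Comcast after their Netflix bill goes up."""
    consumer_side_price = V - fee_to_netflix  # higher Netflix bill -> lower broadband WTP
    return consumer_side_price + fee_to_netflix

# However the fee is split between the two sides, total rent is capped at V.
for fee in (0.0, 10.0, 50.0):
    assert comcast_rent(fee) == V
```

Shifting the fee from one side of the market to the other just relabels where the same rent is collected.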

Interconnection fees that are being paid to Comcast (and virtually all other major ISPs) have virtually nothing to do with Comcast’s market power and everything to do with the fact that the Internet has changed, both in structure and content. This is simply how the Internet works. I use CloudFront, the same CDN that Vox uses, to serve even a small site like my Bitcoin Volatility Index. CloudFront negotiates payments to Comcast and other ISPs on my and Vox’s behalf. There is nothing unseemly about Netflix making similar payments to Comcast, whether indirectly through Cogent or directly, nor is there anything about this arrangement that harms “the little guy” (like me!).

For more reading material on the Netflix/Comcast arrangement, I recommend Dan Rayburn’s posts here, here, and here. Interconnection is a very technical subject, and someone with very specialized expertise like Dan is invaluable in understanding this issue.

]]>
https://techliberation.com/2014/05/02/what-vox-doesnt-get-about-the-battle-for-the-future-of-the-internet/feed/ 1 74487
Why would anyone use Bitcoin when PayPal or Visa work perfectly well? https://techliberation.com/2013/12/04/why-would-anyone-use-bitcoin-when-paypal-or-visa-work-perfectly-well/ https://techliberation.com/2013/12/04/why-would-anyone-use-bitcoin-when-paypal-or-visa-work-perfectly-well/#comments Wed, 04 Dec 2013 21:54:19 +0000 http://techliberation.com/?p=73929


A common question among smart Bitcoin skeptics is, “Why would one use Bitcoin when you can use dollars or euros, which are more common and more widely accepted?” It’s a fair question, and one I’ve tried to answer by pointing out that if Bitcoin were just a currency (except new and untested), then yes, there would be little reason why one should prefer it to dollars. The fact, however, is that Bitcoin is more than money, as I recently explained in Reason. Bitcoin is better thought of as a payments system, or as a distributed ledger, that (for technical reasons) happens to use a new currency called the bitcoin as the unit of account. As Tim Lee has pointed out, Bitcoin is therefore a platform for innovation, and it is this potential that makes it so valuable.

Eric Posner is one of these smart skeptics. Writing in Slate in April, he rejected Bitcoin as a “fantasy” because he felt it didn’t make sense as a currency. Since then it’s been pointed out to him that Bitcoin is more than a currency, and today at The New Republic he asks, “Why would you use Bitcoin when you can use PayPal or Visa, which are more common and widely accepted?”

He answers his own question, in part, by acknowledging that Bitcoin is censorship-resistant. As he puts it, “If you live in a country with capital controls, you can avoid those[.]” So right there, it seems to me, is one good reason why one might want to use Bitcoin instead of PayPal or Visa. Another smart skeptic, Tyler Cowen, acknowledges this as well, even if only to suggest that the price of bitcoins will fall “if/when China fully liberalizes capital flows[.]”

Another reason why one would use Bitcoin instead of PayPal or Visa is that it’s cheaper. Posner disputes this, arguing that Bitcoin’s historic volatility makes it risky to hold Bitcoins, necessitating hedging, and therefore making it no less costly than traditional payments systems. (Cowen was one of the first to make this argument.) But this is not true.

First of all, I would argue that there’s nothing inherent in Bitcoin that makes it necessarily as volatile as it has been; its volatility to date comes largely from the fact that it’s thinly traded. If its adoption continues apace, and its infrastructure continues to be developed, there’s no reason to think it will forever be as volatile as it has been to date. But that’s conjecture. More to the point is the proof that’s in the pudding: There are tens of thousands of merchants accepting bitcoins for payment today (and growing), and the number of transactions accepted by those merchants has been exploding as well, setting a record on Black Friday. Can it be that even with the necessary hedging, Bitcoin is cheaper?

At least for some types of transactions I think the answer is unquestionably yes. Take international remittances, which is a $500 billion industry. Sending money to Kenya using Western Union, MoneyGram, or some other traditional money transmitter costs around five to ten percent of the amount being sent, and can take days for the deposit to take place. A new startup, BitPesa, is looking to charge only three percent, and to carry out transfers virtually instantaneously. So hedging costs would have to exceed the two to seven percentage points saved to make this not worthwhile. It’s an empirical question, but it seems to me the fact that so many are jumping in helps give us a hint as to the answer. Perhaps we can look to BitPay’s 1% fee as a market estimate of the cost of hedging.
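The back-of-the-envelope arithmetic here is easy to check. Below is a small Python sketch using illustrative figures: an 8% traditional fee (the midpoint of five to ten percent), BitPesa's 3%, and BitPay's 1% fee as a stand-in for hedging costs. The $200 remittance amount is an assumption for illustration.

```python
# Illustrative remittance cost comparison. The 8% and 3% rates come from the
# discussion above; the 1% hedging cost uses BitPay's fee as a rough proxy.

def remittance_cost(amount, fee_rate, hedge_rate=0.0):
    """Total cost of sending `amount`: fees plus any hedging cost."""
    return amount * (fee_rate + hedge_rate)

amount = 200.00  # a hypothetical remittance
traditional = remittance_cost(amount, 0.08)          # midpoint of 5-10%
bitcoin_route = remittance_cost(amount, 0.03, 0.01)  # 3% fee + 1% hedge

print(f"Traditional: ${traditional:.2f}, Bitcoin route: ${bitcoin_route:.2f}")
# Even with the hedging cost included, the Bitcoin route is half the cost here.
```

On these (assumed) numbers, hedging would have to cost several percentage points more before the traditional route came out ahead.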

Well then, so far I count two things that Bitcoin can do that traditional payments systems cannot: it is censorship resistant and it is cheaper. Oh, wait. I actually mentioned another one: it’s faster. Traditional wire transfers can take days or even weeks to clear, while Bitcoin takes minutes. And yet there’s more.

As Eli Dourado just pointed out in a previous post, built into Bitcoin is a facility for decentralized arbitration. Essentially, Bitcoin allows for transactions that require two out of three signatures to verify a transaction, thus allowing payer and payee to turn to an arbitrator if there is a dispute about whether the payment should go through. Paypal and credit card companies essentially provide this service today, but as Eli points out, decentralized arbitration would likely be cheaper and would certainly enjoy much more competition. That’s four things Bitcoin can do that traditional payments networks cannot, but let me quickly add a fifth. There’s no reason that the arbitrator must be a human; using Bitcoin’s scripting language the arbitrator can be a trusted automated source of information that on a regular basis broadcasts facts such as the price of gold, or price of stocks, or sports scores. Make that data stream your arbitrator and, voila, you have a decentralized predictions market. (Ed Felten at Princeton is working on executing the concept.)
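The oracle-as-arbitrator idea above can be sketched in a few lines of Python. This is a toy model only: the party names, the gold-price threshold, and the settlement rule are all hypothetical, and cryptographic signatures are reduced to names in a set.

```python
# Toy model of a decentralized prediction-market settlement: a 2-of-3 address
# holds the stake; the third key belongs to an automated data feed (the
# "oracle") that signs for whichever side its broadcast favors.

def settle(gold_price, strike=1250.0):
    """Return the winning party of a bet on whether gold closes above `strike`.
    The winner's own signature plus the oracle's meet the 2-of-3 threshold."""
    winner = "alice" if gold_price > strike else "bob"
    signers = {winner, "oracle"}        # two of the three designated keys
    assert len(signers) >= 2            # the multisig condition is satisfied
    return winner

print(settle(1300.0))  # alice collects the stake
print(settle(1200.0))  # bob collects the stake
```

The point of the sketch is that no human arbitrator is needed: any trusted broadcast of facts can play the third-key role.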

One more before I sign off and go drink with the rest of the Tech Liberation gang at our 15th Alcohol Liberation Front this evening, to which you’re all invited. Bitcoin allows for microtransactions in a way that’s never before been possible. First of all, because Bitcoin transactions can be cheap, you can send incredibly small amounts (say five cents or half a cent) that would be cost-prohibitive using traditional payments systems. There’s a start-up called BitWall that essentially allows publishers to easily charge tiny amounts for their content. Now, believe me, I know all the arguments for and against micropayments for content. My only point is that Bitcoin has the potential to further reduce the friction of such payments. But that’s not the exciting part. More interesting are really, really small microtransactions.

Bitcoin transactions are cheap, but you wouldn’t think they’re cheap enough that you could conduct hundreds per second. But the thing is, you can, using the micropayments channels feature of the Bitcoin protocol. It’s not yet been widely exploited, but it’s there in the spec waiting to be. I won’t go into the technical details in this post, but essentially you transmit one large transaction to the network (you can think of this like a deposit, say of $10), then you conduct as many tiny transactions between payer and payee not broadcast to the network (therefore ‘free’), and finally you broadcast how much of the initial amount remains with each party. What this means is that you can now offer metered services based on microtransactions.
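The bookkeeping described above can be simulated in a short Python sketch. This models only the accounting of a micropayment channel (no actual Bitcoin scripting), using the $10 deposit from the example; the class and method names are my own.

```python
# Toy model of a micropayment channel: one on-chain "deposit", many free
# off-chain updates between payer and payee, one on-chain settlement.

class MicropaymentChannel:
    def __init__(self, deposit_cents):
        self.deposit = deposit_cents
        self.paid = 0          # running total owed to the payee
        self.onchain_txs = 1   # the initial funding transaction
        self.open = True

    def pay(self, cents):
        """Off-chain update: a revised IOU, never broadcast to the network."""
        if not self.open or self.paid + cents > self.deposit:
            raise ValueError("channel closed or deposit exhausted")
        self.paid += cents

    def settle(self):
        """Broadcast the final split; only now does the network see a transaction."""
        self.open = False
        self.onchain_txs += 1
        return self.deposit - self.paid, self.paid  # (refund, payout)

channel = MicropaymentChannel(deposit_cents=1000)  # the $10 deposit
for _ in range(500):                               # 500 one-cent payments
    channel.pay(1)
refund, payout = channel.settle()
print(refund, payout, channel.onchain_txs)         # 500 500 2
```

Five hundred payments, and the network only ever sees two transactions: the deposit and the settlement. That is the property that makes metered services feasible.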

One good example of how this would be useful is Wi-Fi access, which Mike Hearn explains in this video. Today we are surrounded by Wi-Fi hotspots, but we can’t use them because they are password protected, in part because there’s no good way to charge for their use. When you can pay to use a Wi-Fi hotspot, it usually entails creating an account with the provider and then purchasing a block of time, perhaps more than you need. Now imagine if you could connect to any open hotspot, without first creating any kind of account, and pay your way by the second or the kilobyte. That’s possible today with Bitcoin; it’s just going to take some time to be implemented. And think of all the other as-yet unimagined ways that this ability to meter could be put to use!

That’s six ways to answer the question, “Why would you use Bitcoin when you can use PayPal or Visa?” There are more. Hearn discusses a bunch in the video. These are all very real in the sense that they are all technically possible today, but certainly speculative in that there remain regulatory and market hurdles ahead. I can certainly understand why some would be skeptical of Bitcoin’s long-term success (I for one am not certain of it), but I really hope we can get to the point where that skepticism is based on more than misunderstandings about what Bitcoin is or what it can and cannot do.

]]>
https://techliberation.com/2013/12/04/why-would-anyone-use-bitcoin-when-paypal-or-visa-work-perfectly-well/feed/ 4 73929
Stop Saying Bitcoin Transactions Aren’t Reversible https://techliberation.com/2013/12/04/bitcoin-arbitration/ https://techliberation.com/2013/12/04/bitcoin-arbitration/#respond Wed, 04 Dec 2013 19:31:35 +0000 http://techliberation.com/?p=73917

One of the criticisms leveled at Bitcoin by those people determined to hate it is that Bitcoin transactions are irreversible. If I buy goods from an anonymous counterparty online, what’s to stop them from taking my bitcoins and simply not sending me the goods? When I buy goods online using Visa or American Express, if the goods never arrive, or if they aren’t what was advertised, I can complain to the credit card company. The company will do a cursory investigation, and if they find that I was indeed likely ripped off, they will refund me my money. Credit card transactions are reversible; Bitcoin transactions are not. For this service (among others), credit card companies charge merchants a few percentage points on the transaction.

The problem with this account is that it’s not true: baked into the Bitcoin protocol is support for what are known as “m-of-n” or “multisignature” transactions, transactions that require some number m out of a higher number n of parties to sign off.

The simplest variant is a 2-of-3 transaction. Let’s say that I want to buy goods online from an anonymous counterparty. I transfer money to an address jointly controlled by me, the counterparty, and a third-party arbitrator (maybe even Amex). If I get the goods, they are acceptable, and I am honest, I sign the money away to the seller. The seller also signs, and since 2 out of 3 of us have signed, he receives his money. If there is a problem with the goods or if I am dishonest, I sign the bitcoins back to myself and appeal to the arbitrator. The arbitrator, like a credit card company, will do an investigation, make a ruling, and either agree to transfer the funds back to me or to the merchant; again, 2 of 3 parties must agree to transfer the funds.

This is not an escrow service; at no point can the arbitrator abscond with the funds. The arbitrator is paid a market rate in advance for his services, which are offered according to terms agreed upon by all three parties. This is better than the equivalent service using credit cards, because credit cards rely on huge network effects and consequently there are only a handful of suppliers of such transaction arbitration. Using Bitcoin, anyone can be an arbitrator, including the traditional credit card companies (although they might have to lower their fees). Competition in both terms and fees is likely to result in better discovery of efficient rules for dispute resolution.
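The 2-of-3 release rule can be sketched in a few lines of Python. This is a toy model: signatures are just names in a set rather than real ECDSA signatures, and the party labels are illustrative.

```python
# Minimal sketch of an m-of-n spend condition: funds move only when at least
# m of the n designated parties have signed.

PARTIES = {"buyer", "seller", "arbitrator"}

def can_spend(signers, m=2, parties=PARTIES):
    """True if at least m distinct, authorized parties have signed."""
    return len(set(signers) & parties) >= m

# Happy path: buyer and seller both sign; the goods were fine.
assert can_spend({"buyer", "seller"})
# Dispute: the arbitrator rules for the seller.
assert can_spend({"seller", "arbitrator"})
# No single party -- not even the arbitrator -- can move the funds alone.
assert not can_spend({"arbitrator"})
# An outsider's signature counts for nothing.
assert not can_spend({"buyer", "mallory"})
```

The third assertion is the key property: the arbitrator can break a tie but can never take the funds unilaterally, which is why this is not escrow.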

While multisignature transactions are not well understood, they are right there in the Bitcoin protocol, as valid as any other Bitcoin transaction. So some Bitcoin transactions are irreversible; others are exactly as reversible as credit card transactions are.

Bitrated.com is a new site (announced yesterday on Hacker News) that facilitates setting up multisignature transactions. Bitcoin client support for multisignature transactions is limited, so the site helps create addresses that conform to the m-of-n specifications. At no point does the site have access to the funds in the multisignature address.

In addition, Bitrated provides a marketplace where people can advertise their arbitration services. Users are able to set up transactions using arbitrators both from the site or from anywhere else. The entire project is open source, so if you want to set up a competing directory, go for it.

What excites me most about the decentralized arbitration afforded by multisignature transactions is that it could be the beginnings of a Common Law for the Internet. The plain, ordinary Common Law developed as the result of competing courts that issued opinions basically as advertisements of how fair and impartial they were. We could see something similar with Bitcoin arbitration. If arbitrators sign their transactions with links to and a cryptographic hash of a PDF that explains why they ruled as they did, we could see real competition in the articulation of rules. Over time, some of these articulations could come to be widely accepted and form a body of Bitcoin precedent. I look forward to reading the subsequent Restatements.

Multisignature transactions are just one of the many innovations buried deep in the Bitcoin protocol that have yet to be widely utilized. As the community matures and makes full use of the protocol, it will become more clear that Bitcoin is not just a currency but a platform for financial innovation.

Originally posted at elidourado.com.

]]>
https://techliberation.com/2013/12/04/bitcoin-arbitration/feed/ 0 73917
What PiracyData.org Really Says About Copyright: It’s Not Hollywood’s Fault https://techliberation.com/2013/10/29/what-piracydata-org-really-says-about-copyright-its-not-hollywoods-fault/ https://techliberation.com/2013/10/29/what-piracydata-org-really-says-about-copyright-its-not-hollywoods-fault/#comments Wed, 30 Oct 2013 01:48:02 +0000 http://techliberation.com/?p=73704

Two weeks ago, with much fanfare, PiracyData.org went live. Created by co-liberators Jerry Brito and Eli Dourado, along with Matt Sherman, the website tracks TorrentFreak’s list of which movies are most pirated each week, and indicates whether and how consumers may legally watch these movies online. The site’s goal, Brito explains, is to “shed light on the relationship between piracy and viewing options.” Tim Lee has more details over on The Switch.

Assuming the site’s data are accurate—which they appear to be, despite some launch hiccups—PiracyData.org offers an interesting snapshot of the market for movies on the Internet. To date, the data suggest that a sizeable percentage of the most-pirated movies cannot be purchased, rented, or streamed from any legitimate Internet source. Given that most major movies are legally available online, why do the few films that aren’t online attract so many pirates? And why hasn’t Hollywood responded to rampant piracy by promptly making hit new releases available online?

Is Hollywood leaving money on the table?

To many commentators, PiracyData.org is yet another nail in Hollywood’s coffin. Mike Masnick, writing on Techdirt, argues that “the data continues to be fairly overwhelming that the ‘piracy problem’ is a problem of Hollywood’s own making.” The solution? Hollywood should focus on “making more content more widely available in more convenient ways and prices” instead of “just point[ing] the blame finger,” Masnick concludes. Echoing this sentiment, CCIA’s Ali Sternburg points out on DisCo that “[o]ne of the best options for customers is online streaming, and yet piracydata.org shows that none of the most pirated films are available to be consumed in that format.”

But the argument that Hollywood could reap greater profits and discourage piracy simply by making its content more available has serious flaws. For one thing, as Ryan Chittum argues in the Columbia Journalism Review, “the movies in the top-10 most-pirated list are relatively recent releases.” Thus, he observes, these movies are “in higher demand—including from thieves—than back-catalog films.” If PiracyData.org tracked release dates, each film’s recency of release might well turn out to be more closely correlated with piracy than the availability of legitimate viewing options.

In fairness to Masnick and Sternburg, Hollywood probably could make a dent in piracy if it put every new movie on iTunes, Vudu, Google Play, Amazon, and Netflix the day of release. Were these lawful options available from the get-go, they’d likely attract some people who would otherwise pirate a hit new film by grabbing a torrent on The Pirate Bay. Those who pirate movies may be law-breaking misers, but they still weigh tradeoffs and respond to incentives like any other consumer. Concepts like legality may not matter to pirates, but they still care about price, quality, and convenience. This is why you won’t see a video that’s freely available in high-definition on YouTube break a BitTorrent record anytime soon.

But even if Hollywood could better compete with piracy by vastly expanding online options for viewing new release films, this might not be a sound money-making strategy. Each major film studio is owned by a publicly-held corporation that operates for the benefit of its shareholders. In other words, the studios are in the business of earning profits, not maximizing their audiences. For every two people who stream a movie on Netflix instead of pirating it, another person may watch the same Netflix stream in lieu of renting the movie for $2.99 on Google Play, resulting in a net loss for the copyright holder. Deciding how to distribute an information good such as a hit new movie—and how much to charge for it—poses an extremely complex business challenge, especially in an ever-changing economic and technological environment, as Carl Shapiro and Hal Varian explain in their classic 1998 book, Information Rules: A Strategic Guide to the Network Economy.

Why are release windows still around?

Instead of giving Hollywood the benefit of the doubt by assuming the studios are acting rationally, let’s consider whether the industry has a plausible case for releasing movies online only after they’ve been in theaters for months. By way of background, in case you’ve lived under a rock for the past half-century, major studio movie releases typically don’t appear online or in video stores until 90 to 120 days after theatrical release, depending on the movie and distribution avenue. Preserving the box office exclusivity of newly released films encourages moviegoers to head to their local theater—where ticket prices hover around $8—instead of renting the movie on Blu-ray or online at a lower price.

This strategy, a form of price discrimination known as “versioning,” aims to expand the market for movies by appealing to a broader array of consumers. When executed properly, versioning increases studio profits, and probably makes society better off as well. Consider the book market: publishers often release a book in hardcover format at first, then a year later, in paperback at a much lower price. Were both versions released simultaneously, fewer hardcovers would sell, so the publisher would likely charge more for paperbacks. In the end, fewer consumers could afford to purchase and enjoy certain books.

Of course, staggering release dates isn’t the only way studios distinguish different versions of the same movie. In fact, the release window has a serious downside: it doesn’t accommodate people who, for various reasons, can’t easily get to a movie theater. For instance, nearly 11 million disabled Americans receive Social Security Disability Insurance benefits. Around 60 million reside in rural areas, some hundreds of miles from the nearest city. And 14 million U.S. households include one or more children under 6 years of age. For many of these people, traveling to a movie theater may be extremely difficult or costly. Simply getting out of the house for a couple hours may be burdensome, especially for parents with young kids at home. Or, like Mike Masnick, maybe you just hate the experience that movie theaters typically offer.

Fortunately, enjoying a film need not entail trekking to a movie theater. Thanks to television and broadband Internet, watching a movie at home is often as easy as pressing a few buttons on a remote control. This is great news for movie lovers who are disabled, have young kids at home, or live in remote areas. If you want to catch a brand new movie from the comfort of your own couch, however, you’re probably out of luck.

From Hollywood’s perspective, this presents a problem. If a family of four is willing to pay $30 to watch a brand new movie, whether they watch it at home or in a theater is inconsequential. Instead, Hollywood wants to capture as many high-value consumers as possible. The release window is merely an imperfect proxy for consumers’ marginal willingness to pay for movies. It also enables more flexible pricing—one ticket for every person—which is infeasible for digital rentals and purchases. But there’s nothing sacred about the release window. Back in the 1980s, “[t]he video window opened six months after the theatrical release and four months before the pay-per-view window,” according to Edward Jay Epstein, author of The Hollywood Economist. Later, as DVD sales began to rival and sometimes exceed box office receipts, the major studios gradually shortened their release windows, which are now often as short as three months.

Even the 90-day release window is showing signs of obsolescence. Recently, Hollywood has experimented on several occasions with letting consumers watch movies at home just weeks after their theatrical release, as I explained on these pages in 2008. In late 2011, for instance, Universal Studios sought to make Tower Heist available on-demand to 500,000 Comcast subscribers just 21 days after the film’s release. Despite the planned $60 rental fee, however, Universal called off the trial just weeks before Tower Heist’s release after Cinemark, a major movie theater chain, threatened to boycott the film. If you’re still balking at the price tag—yes, a $60 rental is too expensive for all but the most profligate movie buffs—rest assured Hollywood isn’t giving up. As a recent article in The Convergence explains:

Two years ago, a number of major studios tested early release models based on varying windows. Disney rolled out an animated film, “Tangled,” in Portugal just a month and a half after its initial release. In the U.S., DirecTV served up several movies with a 60-day window for $30 a pop from Sony, Warner Bros., Universal Pictures and 20th Century Fox.

A few months ago, The Wall Street Journal reported that Sony offered Django Unchained in South Korea for online and cable rental three weeks after its April 2013 premiere in Korean theaters. And Disney offered two of its animated films, Wreck-it Ralph and Brave, for online rental in South Korea just a few weeks after their releases in 2012. Also this summer, in a more modest trial, Canadian theater company Cineplex joined with Warner Bros. to sell ticket purchasers a digital copy of Pacific Rim for an extra $19.99 to $24.99 atop the ticket price.

Will these experiments become the new norm over the next few years? Quite possibly. Although the theater owners who distribute Hollywood films will surely fight tooth and nail to preserve the release window, whether the theaters prevail depends on consumers themselves. If enough consumers are willing to pay for the privilege of watching new release movies on-demand or online, the studios won’t hesitate to meet this demand—theater owners be damned. But if the studios offer hit new films online at too low a price, some people who would otherwise be willing to spend $8 to see a movie in theaters will instead view it online for much less. This tradeoff, perhaps more than any other factor, explains the release window’s persistence.

If spending $20 or $30 on a digital rental of a new movie sounds unappealing, you’re not alone. Lots of people watch movies using free or inexpensive services such as broadcast television, Netflix, or Amazon Instant Video. There’s nothing wrong with that. But calling on Hollywood to offer popular new films on inexpensive streaming services is asking the studios to take a major pay cut for no good reason. Even if every last U.S. household signed up for Netflix, at $7.99 per month, the company would only bring in $11b annually. That’s a lot of cash, but consider that in 2012, the big six film studios generated $21.8b in revenues, according to Variety.
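The arithmetic behind that comparison is easy to verify. In the sketch below, the roughly 122 million US households figure is my assumption (in line with Census Bureau estimates for the period); the $7.99 monthly price and the studios' $21.8b in revenues come from the text.

```python
# Checking the revenue comparison: even with universal Netflix adoption,
# subscription revenue falls well short of big-six studio revenues.

households = 122_000_000                 # assumed US household count
netflix_annual = households * 7.99 * 12  # every household subscribed, all year

print(f"${netflix_annual / 1e9:.1f}b")   # ~$11.7b
print(netflix_annual < 21.8e9)           # True: still short of the big six
```

Even under this maximally generous assumption, subscription revenue covers barely half of what the studios actually brought in, which is why "put everything on Netflix" is asking for a pay cut.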

The upshot is that while cutting prices can be a great business strategy, especially in information markets where the marginal cost is near-zero, there’s little evidence that Hollywood is systematically over-pricing its movies. Some niche markets may be underserved—I for one wish VUDU’s HDX rentals offered 5.1 audio to PC users, in addition to Xbox 360 and PS3 owners—but all in all, Hollywood has made a ton of progress in the past few years when it comes to offering movies on all sorts of devices at a variety of price points.

Do release windows help combat piracy?

Above, I discussed how shorter release windows can help curb piracy, even if the net result is consumers spending less on movies. But giving consumers more movie options is a double-edged sword. Perhaps the big studios fear that if they begin releasing films online and in theaters simultaneously, the piracy problem may get worse, not better.

Are such concerns legitimate, or are Hollywood bigwigs making a mountain out of a molehill?

To answer this question, a visit to The Pirate Bay is in order. No, I won’t advise you to download a copyrighted movie. Rather, I’ll compare the search results for the last two movies I saw in theaters, both of which made the latest most-pirated list on PiracyData.org. First up is Pacific Rim, currently listed as the fourth most-pirated movie of the week. Released on July 12, 2013, Pacific Rim is available for digital rental and digital purchase, along with Blu-ray and DVD. The other film is Elysium, the sixth most-pirated movie of the week. Released on August 9, 2013, Elysium isn’t available from any authorized online outlets or on disc.

Searching The Pirate Bay for these two films reveals strikingly different results. See for yourself. Here are the top results for Pacific Rim, sorted by number of leeches:

[Screenshot: top Pirate Bay results for Pacific Rim]

And here are the top results for Elysium:

[Screenshot: top Pirate Bay results for Elysium]

See the difference? Several Blu-ray rips of Pacific Rim are available, including several 1080p files and even a 12.9 GB 3D file (good luck getting it to play!). Note also that all of the top Pacific Rim torrents were uploaded during the past few weeks, even though the film came out back in July.

Elysium, on the other hand, is only available as an “HDRip,” “CAM,” “Screener,” “Webrip,” or “Telesync.” According to comments on The Pirate Bay, these files—many of which were uploaded in September or earlier—offer poor quality audio and video, while some files include “hardcoded subs” in a foreign language.

Why the discrepancy between Pacific Rim and Elysium? Because the former is available online and on disc, while the latter has yet to be released in either format. Acquiring a digital copy of a movie, it turns out, is much more cumbersome if the film is still only in theaters. For brand new movies, pirates typically distribute so-called “cams”—whereby a person simply records a movie in a theater with a handheld or tripod-mounted video camera—or “telesyncs,” which are similar to cams but include a direct connection to the sound source. Also sometimes uploaded are “screeners,” a pre-release DVD of a film usually sent to critics and award voters.

But when a hit new movie is released online and on Blu-ray, pristine digital copies of the film soon begin appearing on The Pirate Bay and similar outlets. Despite studios’ ongoing efforts to protect access to their works using digital rights management (DRM), nearly every major movie format has been cracked in short order. Blu-ray’s BD+ DRM, for instance, was cracked not long after its 2007 release by the Antiguan company SlySoft. To this day, newly released Hollywood films invariably end up online almost immediately after their release window ends—if not a few days sooner. Videophiles can even download untouched 30GB MPEG-2 Transport Stream files that are bit-for-bit identical to the Blu-ray version of the film.

If we assume pirates are more or less rational, it follows that they care about video and audio quality, much like their law-abiding counterparts. The easier it is for pirates to acquire a high-quality digital file of a new movie using BitTorrent, therefore, the less likely pirates are to bite the bullet and buy a ticket at the box office for a much-awaited film. Although some pirates may be unwilling to pay to see a movie in a theater at any price, other pirates are surely more flexible when it comes to deciding how to enjoy a new movie. For consumers of the latter type, waiting three or four months to enjoy a new movie in a high-quality format may be unbearable for certain films.

So long as new DRM technologies are cracked soon after their release, many copyright owners will think twice before making high-value content available in a digital format. Unfortunately for law-abiding consumers, this understandable aversion to digital distribution results in fewer lawful venues for enjoying content online. Despite Congress’s efforts to bar the circumvention of technological measures that protect copyrighted works, DRM remains fallible—and the release window persists.

Don’t blame the victim-blamer

Some critics have accused PiracyData.org’s creators of “blaming the victim.” For instance, Larry Spiwak of the Phoenix Center writes that the site’s “mentality is nothing new to the digital piracy debate and it entirely misses the point: intellectual property is property” (his emphasis). Jeff Eisenach makes a similar point over on AEI’s TechPolicyDaily.com.

One need not loathe copyrights, however, to care about how lawful markets for expressive works affect unlawful markets for such works. I’m strongly against assault and robbery—and support the criminalization of both acts—but I still care to know how to avoid being mugged or gunned down. If a journalist pens an article about neighborhoods to shun late at night if you’re alone and unarmed, it doesn’t mean she is pro-robbery or pro-violence. The law isn’t our only fount of protection from would-be aggressors. If we pretend otherwise by ignoring self-help options—or, worse, by trying to craft a government that can safeguard us against all ills—we’re kidding ourselves and screwing our posterity.

To be sure, I think copyright infringement is a problem. Brito agrees with me. And, as I’ve argued here, here, here, and here, federal law probably should do more to address piracy. But this proposition is not self-evident, however much its advocates wish it were so. Indeed, if Hollywood could double its profits while cutting piracy in half by revising its business model over the coming year, that’s an extremely good reason for Congress to hold off on anti-piracy legislation. If, however, Hollywood is doing its damnedest to curb infringement while preserving economic returns that are commensurate with the risks entailed in movie production, then it’s up to lawmakers to determine whether and how to better secure copyrights in a manner that serves human flourishing. Insofar as PiracyData.org can help inform this debate, may a thousand similar websites bloom.

]]>
https://techliberation.com/2013/10/29/what-piracydata-org-really-says-about-copyright-its-not-hollywoods-fault/feed/ 5 73704