Coauthored with Mercatus MA Fellow Walter Stover

Imagine visiting Amazon’s website to buy a Kindle. The product description shows a price of $120. You purchase it, only for a co-worker to tell you he bought the same device for just $100. What happened? Amazon’s algorithm predicted that you would be willing to pay more for the same device. Amazon and other companies before it, such as Orbitz, have experimented with dynamic pricing models that feed personal data collected on users to machine learning algorithms to try to predict how much different individuals are willing to pay. Instead of a fixed price point, users could now see different prices according to the profile that the company has built of them. This has led the U.S. Federal Trade Commission, among others, to explore fears that AI, in combination with big datasets, will harm consumer welfare by enabling companies to manipulate consumers and increase their profits.
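The mechanism described above can be sketched in a few lines of code. To be clear, this is a hypothetical illustration, not Amazon’s or Orbitz’s actual system: the feature names, weights, and price bounds are invented for the example. It shows only the general shape of personalized pricing, where a learned scoring function maps a user profile to a quoted price.

```python
# Toy sketch of a personalized-pricing model. All feature names and
# weights here are hypothetical; a real system would learn them from data.

BASE_PRICE = 100.0

# Hypothetical learned weights mapping profile features to a price adjustment.
WEIGHTS = {
    "recent_premium_purchases": 5.0,   # past spending on premium goods
    "searched_without_buying": 2.0,    # signals strong interest
    "price_sensitive_clicks": -4.0,    # history of chasing discounts
}

def personalized_price(profile: dict) -> float:
    """Quote a price: base price plus a profile-driven adjustment,
    clamped so the quote never strays more than 25% from base."""
    uplift = sum(WEIGHTS.get(k, 0.0) * v for k, v in profile.items())
    quote = max(BASE_PRICE * 0.75, min(BASE_PRICE * 1.25, BASE_PRICE + uplift))
    return round(quote, 2)

# Two shoppers see different prices for the same device.
bargain_hunter = {"price_sensitive_clicks": 3, "recent_premium_purchases": 0}
eager_buyer = {"recent_premium_purchases": 2, "searched_without_buying": 5}

print(personalized_price(bargain_hunter))  # below base price
print(personalized_price(eager_buyer))     # above base price
```

The point of the sketch is that the quoted price depends entirely on the profile the seller has assembled, which is exactly the practice the FTC and others are scrutinizing.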

The promise of personalized shopping and the threat of consumer exploitation, however, first supposes that AI will be able to predict our future preferences. By gathering data on our past purchases, our almost-purchases, our search histories, and more, some fear that advanced AI will build a detailed profile that it can then use to estimate our future preference for a certain good under particular circumstances. This will escalate until companies are able to anticipate our preferences, and pressure us at exactly the right moments to ‘persuade’ us into buying something we ordinarily would not.

Such a scenario cannot come to pass. No matter how much data companies can gather from individuals, and no matter how sophisticated AI becomes, the data to predict our future choices do not exist in a complete or capturable way. Treating consumer preferences as discoverable through sufficiently sophisticated search technology ignores a critical distinction between information and knowledge. Information is objective, searchable, and gatherable. When we talk about ‘data’, we are usually referring to information: particular observations of specific actions, conditions, or choices that we can see in the world. An individual’s salary, geographic location, and purchases are data with an objective, concrete existence that a company can gather and feed into its algorithms.

Continue reading →

Catchy headlines like “Heavy Social Media Use Linked With Mental Health Issues In Teens” and “Have Smartphones Destroyed a Generation?” advance a common trope of generational decline. But a new paper in Nature uses a rigorous analytical method to understand the relationship between adolescent well-being and digital technology, finding a “negative but small [link], explaining at most 0.4% of the variation in well-being.” Continue reading →

Air taxis and electric vertical takeoff and landing aircraft (eVTOLs) will receive significant regulatory attention in 2019 as companies test these aircraft and move toward commercialization. I’m fairly bullish on the technology and its potential, and I’m pleased that state lawmakers and mayors seem to be waking up to the massive possibilities of this industry.

A recent NASA-commissioned study estimates that in the best-case scenario, the U.S. air taxi market would be worth about $500 billion annually, which is nearly the size of the U.S. auto sector. This translates into about 1 million air taxis in the air and 11 million flights per day. Morgan Stanley researchers recently estimated that the global flying car market could be about $1.5 trillion annually by 2040.

You can quibble with the numbers, but it’s clear that aircraft companies and governments believe flying cars are no longer science fiction. Uber plans to offer commercial eVTOL flights in 2023, with testing beginning in 2020. Boeing plans testing later this year.

Federal and state lawmakers need to start preparing for the industry. In November, I published a paper and a Wall Street Journal op-ed proposing that the FAA demarcate and auction highways in the sky–exclusive aerial corridors–for air taxi flights, as a way to manage airspace congestion and preserve competition.

As I wrote in the Detroit News a few weeks ago, state lawmakers also need to start planning for air taxis. States don’t manage aircraft flights but they do manage zoning, property rights, and other areas where state policy can inhibit or encourage the air taxi industry. I mentioned in the op-ed that there are two things states can do in the near future.

Aerial Navigational Easement

First, a good policy is to grant small aircraft a navigational easement to low-altitude airspace. Trespass lawsuits from landowners could scare away companies and innovators who want to test passenger drone and air taxi flights.

About half of states created these aerial navigation easements in the 1920s and 1930s so that trespass lawsuits would not interfere with the new aviation industry. Per these state statutes, flights over property are allowed so long as they do not substantially interfere with the homeowner’s use and enjoyment of the land.

Aerial navigation easement laws have a few benefits. They:

  1. Reaffirm the primacy of landowner property interests.
  2. Reinforce state prerogatives to determine property rights.
  3. Encourage the drone and air taxi industry by precluding most trespass lawsuits.
  4. Avoid a fight with federal regulators by leaving air traffic management policy untouched.

This 80-year-old policy will see new relevance in the states this year. Last month, in Washington, a landowner sued a drone operator for aerial trespass. Washington, notably, does not provide for an aerial navigational easement in law.

Air Taxi Advisory Committee

Second, governors or legislatures should consider creating advisory committees for the air taxi industry. Air taxis will raise all sorts of novel state and local issues. A few come to mind:

  • Should municipal zoning laws for helipads and air taxi “vertiports” be liberalized?
  • eVTOLs require substantial electrical grid improvements and distributed, powerful charging stations on rooftops and landing sites. Are state regulations standing in the way?
  • Air taxis, like trains and autos, create significant noise, and local nuisance laws could essentially preclude all air taxi testing and operation. What decibel levels appropriately balance industry needs and public acceptance? Should that be decided at the state or local level?

State advisory committees were created for another emerging technology sector–autonomous vehicles. Committees are composed of stakeholders, including public safety representatives, consumer groups, industry representatives, and academics. They can create policy recommendations for legislators and participate in hearings as air taxis come closer to commercialization.

For the air taxi industry to reach its potential, there needs to be collaboration between and foresight from state and federal lawmakers. Air taxi technology has moved far ahead of law, regulation, and public perception. Fortunately, I expect state and local officials to start examining their current laws and whether modernization is in order to stimulate this transportation sector.

Policy incentives matter and have a profound effect on the innovative capacity of a nation. If policymakers erect more obstacles to innovation, entrepreneurs will look elsewhere when considering the most hospitable place to undertake their innovative activities. This is “global innovation arbitrage,” a topic we’ve discussed many times here in the past. I’ve defined it as “the idea that innovators can, and will with increasing regularity, move to those jurisdictions that provide a legal and regulatory environment more hospitable to entrepreneurial activity.” We see innovation arbitrage happening in high-tech fields as far-ranging as drones, driverless cars, and genetics, among others.

US policymakers might want to consider this danger before the nation loses its competitive advantage in various high-tech fields. Today’s most pressing example arrives in the form of potentially burdensome new export control regulations. In late 2018, the US Department of Commerce’s Bureau of Industry and Security announced a “Review of Controls for Certain Emerging Technologies,” which launched an inquiry about whether to greatly expand the list of technologies that would be subjected to America’s complex export control regulations. Most of the long list of technologies under consideration (such as artificial intelligence, robotics, 3D printing, and advanced computing technologies) were “dual-use” in nature, meaning that they have many peaceful applications.

Continue reading →

Below are the top 10 posts on the Technology Liberation Front in 2018. Everything from privacy, to 5G, to tech monopolies, and net neutrality. Enjoy, and Happy New Year!

10. How Well-Intentioned Privacy Regulation Could Boost Market Power of Facebook & Google, April 25.

9. Nationalizing 5G networks? Why that’s a bad idea., January 29. (Republished at The Federalist.)

8. The Pacing Problem, the Collingridge Dilemma & Technological Determinism, August 16.

7. GDPR Compliance: The Price of Privacy Protections, July 9.

6. Evasive Entrepreneurialism and Technological Civil Disobedience: Basic Definitions, July 10.

5. No, “83% of Americans” do not support the 2015 net neutrality regulations, May 18.

4. The FCC can increase 5G deployment by empowering homeowners, July 26.

3. Doomed to fail: “net neutrality” state laws, February 20.

2. Should We Teach Children to Be Entrepreneurs, or How to Pay Licensing Fees?, Aug. 21.

1. The Week Facebook Became a Regulated Monopoly (and Achieved Its Greatest Victory in the Process), April 10.

It has now been a year since the network neutrality rules supported by Title II were officially repealed, marking the end of the Obama-era regulatory regime. Writing in Wired, Klint Finley noted that, “The good news is that the internet isn’t drastically different than it was before. But that’s also the bad news: The net wasn’t always so neutral to begin with.”

At the time, many worried about what would happen. Apple co-founder Steve Wozniak and former FCC Commissioner Michael Copps suggested that two worlds were possible: “Will consumers and citizens control their online experiences, or will a few gigantic gatekeepers take this dynamic technology down the road of centralized control, toll booths and constantly rising prices for consumers?”
Continue reading →

One year ago, the FCC majority passed the 2017 Restoring Internet Freedom Order, largely overturning the 2015 Open Internet Order. I consider the 2017 Order the most significant FCC action in a generation. The FCC did a rare thing for an agency—it voluntarily narrowed its authority to regulate a powerful and massive industry.

In addition to returning authority to the Federal Trade Commission and state attorneys general, the 2017 Order restored common-sense regulatory humility, despite the courts blessing the Obama FCC’s unconvincing, expansive interpretation of FCC authority. National policy, codified in law, is that the Internet and Internet services should be “unfettered by Federal or State regulation,” which, if it means anything, means Internet services cannot be regulated as common carriers.

Net neutrality is dead

Net neutrality advocates who want the FCC to have common carriage powers over Internet applications and networking practices were outraged by the approval of the 2017 Order. Joe Kane at R Street has a good roundup of some of the death-of-the-Internet hyperbole from the political class and advocates. Some disturbed net neutrality supporters took it too far, including threats to the lives and families of the Republican commissioners, especially Chairman Pai.

But the 2017 Order hadn’t killed net neutrality. It was already dead. A few hours after the passage of the Restoring Internet Freedom Order, I was on a net neutrality panel in DC for an event about the First Amendment and the Internet. (One of my co-panelists withdrew out of caution because of the credible bomb threat at the FCC that day.) I pointed out at that event that while you wouldn’t know it from the news coverage, the Obama FCC had already killed net neutrality’s core principle—the prohibition against content blocking. The 2015 “net neutrality” Order allowed ISPs to block content. Attributing things to the 2015 Order that it simply doesn’t do is what Commissioner Carr has called the “Title II head fake.” The 2017 Order simply freed ISPs and app companies to invest and innovate without fear of plodding scrutiny and inconclusive findings from a far-off FCC bureau.

Long live net neutrality

The net neutrality movement will live on, however. The main net neutrality proponents aren’t that concerned with ISP content blocking; they want FCC regulation of the Internet companies and new media. It’s no coincidence that most of the prominent net neutrality advocates come out of the media access movement, which urged the FCC’s Fairness Doctrine, equal time laws, and programming mandates for TV and radio broadcasts.

The newer net neutrality coalition, as then-FCC Chairman Wheeler conceded frankly, doesn’t know precisely what Internet regulation would look like. What they do know is that ISPs and Internet companies are operating with inadequate public supervision and government design. 

As Public Knowledge CEO Gene Kimmelman has said, the 2015 Order was about threatening the industry with vague but severe rules: “Legal risk and some ambiguity around what practices will be deemed ‘unreasonably discriminatory’ have been effective tools to instill fear for the last 20 years” for the telecom industry. Title II functions, per Kimmelman, as a “way[] to keep the shadow and the fear of ‘going too far’ hanging over the dominant ISPs.” Internet regulation advocates, he said at the time, “have to have fight after fight over every claim of discrimination, of new service or not.”

So it’s Internet regulation, not strict net neutrality, that is driving the movement. As former Obama administration and FCC adviser Kevin Werbach said last year, “It’s not just broadband providers that are fundamental public utilities, at some level Google is, at some level Facebook is, at some level Amazon is.” 

Fortunately, because of the Restoring Internet Freedom Order, IP networks and apps companies have a few years of regulatory reprieve at a critical time. Net neutrality was invented in 2003 and draws on common carriage principles that cannot be applied sensibly to the various services carried on IP networks. Unlike the “single app” phone network regulated with common carriage, these networks transmit thousands of services and apps–like VoIP, gaming, conferencing, OTT video, IPTV, VoLTE, messaging, and Web–that require various technologies, changing topologies, and different quality-of-service requirements. 5G wireless will only accelerate the service differentiation that is at severe tension with net neutrality norms.

Rather than distract agency staff and the Internet industry with metaphysical debates about “reasonable network” practices, the Trump FCC has prioritized network investment, spectrum access, and rural broadband. Hopefully the next year is like the last.

Addendum: The net neutrality reprieve has not only freed up FCC staff to work on more pressing matters, it’s also freed up my time to write about tech policy areas that will benefit the public. In November I published a Mercatus working paper and a Wall Street Journal op-ed about flying car policy.

Autonomous vehicles are quickly becoming a reality. Waymo just launched a driverless taxi service in Arizona. Part of GM’s cuts was based on a decision to refocus its efforts around autonomous vehicle technology. Tesla keeps promising features that take us closer than ever to a self-driving future. Much of this progress has been supported by the light-touch approach that both state and federal regulators have taken up to this point. This approach has allowed the technology to develop rapidly, and the potential impact of federal legislation that might sidetrack this progress should be carefully considered.

For over a year, the Senate has considered passing federal legislation for autonomous vehicle technology, the AV START Act, after similar legislation already passed the House of Representatives. The bill would clarify the appropriate roles for state and federal authorities, preempt some state actions regarding autonomous vehicle regulation, and hopefully end some of the patchwork problems that have emerged. While federal legislation regarding preemption may be necessary for autonomous vehicles to truly revolutionize transportation, other parts of the bill could create increased regulatory burdens that add speed bumps on the path of this life-saving innovation.

Continue reading →

Bots and Pirates

December 4, 2018

A series of recent studies has shown the centrality of social media bots to the spread of “low credibility” information online. Automated amplification, the process by which bots share each other’s content, allows these algorithmic manipulators to spread false information across social media in seconds by increasing its visibility. These findings, combined with the already rising public perception of social media as harmful to democracy, are likely to motivate some Congressional action regarding social media practices. In a divided Congress, one thing that seems to be drawing bipartisan support is antagonism toward Big Tech.

Regulating social media to stop misinformation would mistake the symptoms of an illness for its cause. Bots spreading low-quality content online are not a cause of declining social trust, but a result of it. Actions that explicitly restrict access to this type of information would likely have the opposite of their intended effect, leading people to believe more radical conspiracies and claim that the truth is being censored.

A parallel to the prevalence of bots spreading information today is the high rate of media piracy that lasted from the late 1990s through the mid-2000s but declined significantly over the past decade. (Many claims by anti-piracy advocates of consistently rising U.S. piracy fail to account for the growing file sizes of high-quality downloads and the expansion of internet access; as a share of total content consumption, piracy was historically declining.) Content piracy and automated amplification by bots share a relationship through their fulfillment of consumer demand. Just as nobody would pirate videos if there were not some added value over legal video access, bots would not be able to generate legitimate engagement solely by gaming algorithms. There exists a gap in the market to serve consumers the type of content they desire in a convenient, easy-to-access form.

This fulfillment of market demand is what changed consumer interest in piracy, and it is what is needed to change interest in “low credibility” content. In the early days of the MP3 file format, the music industry strongly resisted changing its business models, which led to the proliferation of file-sharing sites like Napster. While lawsuits may have shut down individual file-sharing sites, they did not alter the demand for pirated content, and piracy persisted. The music industry’s begrudging adoption of iTunes began to change these incentives, but pirated music streaming persisted. It was with legal streaming services like Spotify that piracy began to decline, as consumers began to receive what they asked for from legitimate sources: convenience and cheap access to content. It is important to note that pirating in the early days was not convenient: malware and slow download speeds made it a cumbersome affair. But given the laggard nature of media industry incumbents, consumers sought it out nonetheless.

The type of content considered “low credibility” today, similarly, is not convenient, as clickbait and horrible formatting intentionally make such sites painful to use in order to maximize advertising dollars extracted. The fact that consumers still seek these sites out regardless is a testament to the failure of the news industry to cater to consumer demands.

To reduce the efficacy of bots in sharing content, innovation is needed in content production or distribution to ensure convenience, low cost, and subjective user trust. This innovation may come from the social media side through experimentation with subscription services less dependent on advertising revenue. It may come from news media, either through changes in how they cater content to consumers, or through changes in reporting styles to increase engagement. It may even come through a social transformation in how news is consumed. Some thinkers believe that we are entering a reputation age, which would shift the burden of trust from a publication to individual reporters who curate our content. These changes, however, would be hampered by some of the proposed means to curtail bots on social media.

The most prominent proposal to regulate social media involves applying traditional publisher standards to online platforms through the repeal of Section 230 of the Communications Decency Act, which would make platforms liable for the content users post. While this would certainly incentivize more aggressive action against online bots, as well as against a wide swath of borderline content, the compliance costs would be tremendous given the scale at which social media sites must moderate content. This in turn would price out innovators who, even with fewer bots than Twitter or Facebook, could not stomach the liability risk of the bots that remain. Other proposals, such as California’s ban on bots pretending to be human, reviving the Fairness Doctrine for online content, or antitrust action, range from unenforceable to counterproductive.

As iTunes, Spotify, Netflix, and other digital media platforms were innovating in the ways they deliver content to consumers, piracy enforcement gained strength to limit copyright violations, to little effect. While piracy as a problem may not have disappeared, it is clear that regulatory efforts to crack down on it contributed little, since the demand for pirated content did not stem purely from the medium of its transmission. Bots do not proliferate because of social media, but because of declining social trust. Rebuilding that trust requires building the new, not constraining the old.


This week I will be traveling to Montreal to participate in the 2018 G7 Multistakeholder Conference on Artificial Intelligence. This conference follows the G7’s recent Ministerial Meeting on “Preparing for the Jobs of the Future” and will also build upon the G7 Innovation Ministers’ Statement on Artificial Intelligence. The goal of Thursday’s conference is to “focus on how to enable environments that foster societal trust and the responsible adoption of AI, and build upon a common vision of human-centric AI.” About 150 participants selected by G7 partners are expected to attend, and I was invited as a U.S. expert, which is a great honor.

I look forward to hearing and learning from other experts and policymakers who are attending this week’s conference. I’ve been spending a lot of time thinking about the future of AI policy in recent books, working papers, essays, and debates. My most recent essay concerning a vision for the future of AI policy was co-authored with Andrea O’Sullivan and it appeared as part of a point/counterpoint debate in the latest edition of the Communications of the ACM. The ACM is the Association for Computing Machinery, the world’s largest computing society, which “brings together computing educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges.” The latest edition of the magazine features about a dozen different essays on “Designing Emotionally Sentient Agents” and the future of AI and machine-learning more generally.

In our portion of the debate in the new issue, Andrea and I argue that “Regulators Should Allow the Greatest Space for AI Innovation.” “While AI-enabled technologies can pose some risks that should be taken seriously,” we note, “it is important that public policy not freeze the development of life-enriching innovations in this space based on speculative fears of an uncertain future.” We contrast two different policy worldviews — the precautionary principle versus permissionless innovation — and argue that:

artificial intelligence technologies should largely be governed by a policy regime of permissionless innovation so that humanity can best extract all of the opportunities and benefits they promise. A precautionary approach could, alternatively, rob us of these life-saving benefits and leave us all much worse off.

That’s not to say that AI won’t pose some serious policy challenges for us going forward that deserve serious attention. Rather, we are warning against the dangers of allowing worst-case thinking to be the default position in these discussions. Continue reading →