Below are the top 10 posts on the Technology Liberation Front in 2018, covering everything from privacy and 5G to tech monopolies and net neutrality. Enjoy, and Happy New Year!

10. How Well-Intentioned Privacy Regulation Could Boost Market Power of Facebook & Google, April 25.

9. Nationalizing 5G networks? Why that’s a bad idea, January 29. (Republished at The Federalist.)

8. The Pacing Problem, the Collingridge Dilemma & Technological Determinism, August 16.

7. GDPR Compliance: The Price of Privacy Protections, July 9.

6. Evasive Entrepreneurialism and Technological Civil Disobedience: Basic Definitions, July 10.

5. No, “83% of Americans” do not support the 2015 net neutrality regulations, May 18.

4. The FCC can increase 5G deployment by empowering homeowners, July 26.

3. Doomed to fail: “net neutrality” state laws, February 20.

2. Should We Teach Children to Be Entrepreneurs, or How to Pay Licensing Fees?, August 21.

1. The Week Facebook Became a Regulated Monopoly (and Achieved Its Greatest Victory in the Process), April 10.

It has now been a year since the Title II-backed network neutrality rules were officially repealed, marking the end of the Obama-era regulations. Writing in Wired, Klint Finley noted, “The good news is that the internet isn’t drastically different than it was before. But that’s also the bad news: The net wasn’t always so neutral to begin with.”

At the time, many worried about what would happen. Apple co-founder Steve Wozniak and former FCC Commissioner Michael Copps suggested that two worlds were possible: “Will consumers and citizens control their online experiences, or will a few gigantic gatekeepers take this dynamic technology down the road of centralized control, toll booths and constantly rising prices for consumers?”

One year ago, the FCC majority passed the 2017 Restoring Internet Freedom Order, largely overturning the 2015 Open Internet Order. I consider the 2017 Order the most significant FCC action in a generation. The FCC did a rare thing for an agency—it voluntarily narrowed its authority to regulate a powerful and massive industry.

In addition to returning authority to the Federal Trade Commission and state attorneys general, the 2017 Order restored common-sense regulatory humility, despite the court’s blessing of the Obama FCC’s unconvincing, expansive interpretation of FCC authority. National policy, codified in law, is that the Internet and Internet services should be “unfettered by Federal or State regulation,” which, if it means anything, means Internet services cannot be regulated as common carriers.

Net neutrality is dead

Net neutrality advocates who want the FCC to have common carriage powers over Internet applications and networking practices were outraged by the approval of the 2017 Order. Joe Kane at R Street has a good roundup of some of the death-of-the-Internet hyperbole from the political class and advocates. Some disturbed net neutrality supporters took it too far, making threats against the lives and families of the Republican commissioners, especially Chairman Pai.

But the 2017 Order hadn’t killed net neutrality. It was already dead. A few hours after the passage of the Restoring Internet Freedom Order, I was on a net neutrality panel in DC for an event about the First Amendment and the Internet. (One of my co-panelists withdrew out of caution because of the credible bomb threat at the FCC that day.) I pointed out at that event that, while you wouldn’t know it from the news coverage, the Obama FCC had already killed net neutrality’s core principle—the prohibition against content blocking. The 2015 “net neutrality” Order allowed ISPs to block content. Attributing things to the 2015 Order that it simply doesn’t do is what Commissioner Carr has called the “Title II head fake.” The 2017 Order simply freed ISPs and app companies to invest and innovate without fear of plodding scrutiny and inconclusive findings from a far-off FCC bureau.

Long live net neutrality

The net neutrality movement will live on, however. The main net neutrality proponents aren’t that concerned with ISP content blocking; they want FCC regulation of the Internet companies and new media. It’s no coincidence that most of the prominent net neutrality advocates come out of the media access movement, which urged the FCC’s Fairness Doctrine, equal time laws, and programming mandates for TV and radio broadcasts.

The newer net neutrality coalition, as then-FCC Chairman Wheeler frankly conceded, doesn’t know precisely what Internet regulation would look like. What they do know is that ISPs and Internet companies are operating without adequate public supervision and government design.

As Public Knowledge CEO Gene Kimmelman has said, the 2015 Order was about threatening the industry with vague but severe rules: “Legal risk and some ambiguity around what practices will be deemed ‘unreasonably discriminatory’ have been effective tools to instill fear for the last 20 years” for the telecom industry. Title II functions, per Kimmelman, as a “way[] to keep the shadow and the fear of ‘going too far’ hanging over the dominant ISPs.” Internet regulation advocates, he said at the time, “have to have fight after fight over every claim of discrimination, of new service or not.”

So it’s Internet regulation, not strict net neutrality, that is driving the movement. As former Obama administration and FCC adviser Kevin Werbach said last year, “It’s not just broadband providers that are fundamental public utilities, at some level Google is, at some level Facebook is, at some level Amazon is.” 

Fortunately, because of the Restoring Internet Freedom Order, IP networks and app companies have a few years of regulatory reprieve at a critical time. Net neutrality was invented in 2003 and draws on common carriage principles that cannot be applied sensibly to the various services carried on IP networks. Unlike the “single app” phone network regulated with common carriage, these networks transmit thousands of services and apps (VoIP, gaming, conferencing, OTT video, IPTV, VoLTE, messaging, and the Web) that require various technologies, changing topologies, and different quality-of-service requirements. 5G wireless will only accelerate the service differentiation that is in severe tension with net neutrality norms.
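Those differing quality-of-service requirements are concrete enough to show in code. Here is a minimal Python sketch of how an application might mark its traffic with DiffServ code points so routers can treat a voice call differently from a web page; the class-to-DSCP mapping and the `open_marked_socket` helper are my own illustration, not any ISP's actual policy.

```python
import socket

# Illustrative DiffServ code points for a few of the traffic classes
# named above (EF/AF values follow common RFC 4594 guidance, but the
# class-to-value mapping is an assumption made for illustration).
DSCP = {
    "voip": 46,    # Expedited Forwarding: voice needs low latency and jitter
    "video": 34,   # AF41: conferencing and interactive video
    "gaming": 26,  # AF31: latency-sensitive, somewhat loss-tolerant
    "web": 0,      # Best effort: ordinary browsing
}

def open_marked_socket(traffic_class: str) -> socket.socket:
    """Open a UDP socket whose outgoing packets carry the class's DSCP mark."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # DSCP occupies the upper six bits of the IP TOS byte, hence the shift.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP[traffic_class] << 2)
    return sock

voip_sock = open_marked_socket("voip")  # routers may queue this ahead of "web"
```

Whether and how a network honors such marks is exactly the kind of engineering differentiation that a strict one-size-fits-all neutrality rule struggles to accommodate.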

Rather than distract agency staff and the Internet industry with metaphysical debates about “reasonable network management” practices, the Trump FCC has prioritized network investment, spectrum access, and rural broadband. Hopefully the next year is like the last.

Addendum: The net neutrality reprieve has not only freed up FCC staff to work on more pressing matters, it’s freed up my time to write about tech policy areas that will benefit the public. In November I published a Mercatus working paper and a Wall Street Journal op-ed about flying car policy.

Autonomous vehicles are quickly becoming a reality. Waymo just launched a driverless taxi service in Arizona. Part of GM’s cuts were based on a decision to refocus its efforts on autonomous vehicle technology. Tesla keeps promising more and more features that take us closer than ever to a self-driving future. Much of this progress has been supported by the light-touch approach taken by both state and federal regulators up to this point. That approach has allowed the technology to develop rapidly, and the potential impact of federal legislation that might detour this progress should be carefully considered.

For over a year, the Senate has considered passing federal legislation for autonomous vehicle technology, the AV START Act, after similar legislation already passed the House of Representatives. This bill would clarify the appropriate roles for state and federal authorities, preempt some state actions regulating autonomous vehicles, and hopefully end some of the patchwork problems that have emerged. While federal legislation regarding preemption may be necessary for autonomous vehicles to truly revolutionize transportation, other parts of the bill could create increased regulatory burdens that actually add speed bumps on the path of this life-saving innovation.


Bots and Pirates


A series of recent studies has shown the centrality of social media bots to the spread of “low credibility” information online. Automated amplification, the process by which bots help share each other’s content, allows these algorithmic manipulators to spread false information across social media in seconds by increasing its visibility. These findings, combined with the already rising public perception of social media as harmful to democracy, are likely to motivate some Congressional action regarding social media practices. In a divided Congress, one thing that seems to draw bipartisan support is antagonism toward Big Tech.
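To make “automated amplification” concrete, here is a toy Python heuristic of my own, not the method of any of the studies above: accounts in a bot ring tend to reshare each other's content unnaturally fast and with unnaturally regular timing, which leaves a simple statistical fingerprint.

```python
from statistics import mean, stdev

def amplification_score(reshare_delays: list[float]) -> float:
    """Toy bot-likeness score from an account's reshare timing.

    reshare_delays: seconds between a post appearing and this account
    resharing it. Humans are slow and erratic; amplification bots are
    fast and metronomic, so a short mean delay plus low variance
    pushes the score toward 1.
    """
    if len(reshare_delays) < 5:
        return 0.0  # too little data to judge
    m, s = mean(reshare_delays), stdev(reshare_delays)
    speed = 1.0 / (1.0 + m / 60.0)       # near 1 when reshares land in seconds
    regularity = 1.0 / (1.0 + s / 60.0)  # near 1 when delays barely vary
    return speed * regularity

# A bot ring resharing within seconds, like clockwork, vs. a human:
print(amplification_score([2.1, 1.9, 2.0, 2.2, 2.0]))          # ~0.97
print(amplification_score([40.0, 900.0, 65.0, 3000.0, 12.0]))  # ~0.003
```

Real detection systems combine many such signals; the point is only that amplification is measurable behavior, not a vague accusation.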

Regulating social media to stop misinformation would mistake the symptoms of an illness for its cause. Bots spreading low-quality content online are not a cause of declining social trust, but a result of it. Actions that explicitly restrict access to this type of information would likely produce the opposite of their intended effect, leading people to believe ever more radical conspiracies and to claim that the truth is being censored.

A parallel to the prevalence of bots spreading information today is the high rate of media piracy that lasted from the late 1990s through the mid-2000s but declined significantly over the past decade. (Many claims by anti-piracy advocates of consistently rising US piracy fail to account for the growing file sizes of high-quality downloads and the expansion of internet access; as a share of total content consumption, piracy was already declining.) Content piracy and automated amplification by bots share a relationship through their fulfillment of consumer demand. Just as nobody would pirate videos if there were not some added value over legal video access, bots could not generate legitimate engagement solely by gaming algorithms. There exists a gap in the market: serving consumers the type of content they desire in a convenient, easy-to-access form.
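The parenthetical point, that absolute piracy volume can grow while piracy's share of consumption shrinks, is easy to see with a quick calculation. Every figure below is invented purely for illustration.

```python
# Hypothetical figures, invented to illustrate the share-vs-volume
# point above -- not real measurements of piracy or consumption.
years = {
    #      (pirated_TB, total_consumed_TB)
    2004: (100, 400),    # small MP3-era files, fewer broadband users
    2012: (300, 3000),   # HD-era files balloon absolute volumes
}

for year, (pirated, total) in years.items():
    share = 100 * pirated / total
    print(f"{year}: pirated {pirated} TB = {share:.0f}% of all consumption")

# 2004: pirated 100 TB = 25% of all consumption
# 2012: pirated 300 TB = 10% of all consumption
# Absolute piracy tripled, yet its share fell by more than half.
```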

This fulfillment of market demand is what changed consumer interest in piracy, and it is what is needed to change interest in “low credibility” content. In the early days of the MP3 file format, the music industry strongly resisted changing its business models, which led to the proliferation of file-sharing sites like Napster. While lawsuits may have shut down individual file-sharing sites, they did not alter the demand for pirated content, and piracy persisted. The music industry’s begrudging adoption of iTunes began to change these incentives, but pirated music streaming persisted. It was with legal streaming services like Spotify that piracy began to decline, as consumers began to receive what they asked for from legitimate sources: convenient and cheap access to content. It is important to note that pirating in the early days was not convenient: malware and slow download speeds made it a cumbersome affair. But given the laggard nature of media industry incumbents, consumers sought it out nonetheless.

The type of content considered “low credibility” today, similarly, is not convenient, as clickbait and horrible formatting intentionally make such sites painful to use in order to maximize advertising dollars extracted. The fact that consumers still seek these sites out regardless is a testament to the failure of the news industry to cater to consumer demands.

To reduce the efficacy of bots in sharing content, innovation is needed in content production or distribution to ensure convenience, low cost, and subjective user trust. This innovation may come from the social media side through experimentation with subscription services less dependent on advertising revenue. It may come from news media, either through changes in how they cater content to consumers, or through changes in reporting styles to increase engagement. It may even come through a social transformation in how news is consumed. Some thinkers believe that we are entering a reputation age, which would shift the burden of trust from a publication to individual reporters who curate our content. These changes, however, would be hampered by some of the proposed means to curtail bots on social media.

The most prominent proposal to regulate social media involves applying traditional publisher standards to online platforms through the repeal of Section 230 of the Communications Decency Act, which in turn would make platforms liable for the content users post. While this would certainly incentivize more aggressive action against online bots (as well as against a wide swath of borderline content), the compliance costs would be tremendous given the scale at which social media sites must moderate content. This in turn would price out innovators who could not stomach the risk of hosting bots: fewer than Twitter or Facebook, perhaps, but still some. Other proposals, such as California’s ban on bots pretending to be human, reviving the Fairness Doctrine for online content, or antitrust action, range from the unenforceable to the counterproductive.
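A back-of-envelope sketch shows why that scale argument bites. All numbers below are invented assumptions, chosen only to be order-of-magnitude plausible.

```python
# Back-of-envelope cost of human-reviewing every post under a
# publisher-liability regime. Every number is an invented assumption.
SECONDS_PER_REVIEW = 30   # assumed time for a moderator to vet one post
HOURLY_COST = 20.0        # assumed fully loaded moderator cost, USD/hour

def annual_review_cost(posts_per_day: float) -> float:
    """Yearly cost of reviewing every post before it can create liability."""
    hours_per_day = posts_per_day * SECONDS_PER_REVIEW / 3600
    return hours_per_day * HOURLY_COST * 365

print(f"incumbent (~500M posts/day): ${annual_review_cost(500e6):,.0f}/yr")
print(f"startup   (~1M posts/day):   ${annual_review_cost(1e6):,.0f}/yr")
# incumbent (~500M posts/day): $30,416,666,667/yr
# startup   (~1M posts/day):   $60,833,333/yr
# The startup's bill is smaller in absolute terms, but unlike the
# incumbent's it is ruinous relative to any plausible early revenue.
```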

As iTunes, Spotify, Netflix, and other digital media platforms were innovating in how they delivered content to consumers, piracy enforcement gained strength to limit copyright violations, to little effect. While piracy as a problem may not have disappeared, it is clear that regulatory efforts to crack down on it contributed little, since the demand for pirated content did not stem purely from the medium of its transmission. Bots do not proliferate because of social media, but because of declining social trust. Rebuilding that trust requires building the new, not constraining the old.


This week I will be traveling to Montreal to participate in the 2018 G7 Multistakeholder Conference on Artificial Intelligence. This conference follows the G7’s recent Ministerial Meeting on “Preparing for the Jobs of the Future” and will also build upon the G7 Innovation Ministers’ Statement on Artificial Intelligence. The goal of Thursday’s conference is to “focus on how to enable environments that foster societal trust and the responsible adoption of AI, and build upon a common vision of human-centric AI.” About 150 participants selected by G7 partners are expected to attend, and I was invited as a U.S. expert, which is a great honor.

I look forward to hearing and learning from other experts and policymakers who are attending this week’s conference. I’ve been spending a lot of time thinking about the future of AI policy in recent books, working papers, essays, and debates. My most recent essay concerning a vision for the future of AI policy was co-authored with Andrea O’Sullivan and it appeared as part of a point/counterpoint debate in the latest edition of the Communications of the ACM. The ACM is the Association for Computing Machinery, the world’s largest computing society, which “brings together computing educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges.” The latest edition of the magazine features about a dozen different essays on “Designing Emotionally Sentient Agents” and the future of AI and machine-learning more generally.

In our portion of the debate in the new issue, Andrea and I argue that “Regulators Should Allow the Greatest Space for AI Innovation.” “While AI-enabled technologies can pose some risks that should be taken seriously,” we note, “it is important that public policy not freeze the development of life-enriching innovations in this space based on speculative fears of an uncertain future.” We contrast two different policy worldviews — the precautionary principle versus permissionless innovation — and argue that:

artificial intelligence technologies should largely be governed by a policy regime of permissionless innovation so that humanity can best extract all of the opportunities and benefits they promise. A precautionary approach could, alternatively, rob us of these life-saving benefits and leave us all much worse off.

That’s not to say that AI won’t pose serious policy challenges going forward that deserve our attention. Rather, we are warning against the dangers of allowing worst-case thinking to be the default position in these discussions.

By Adam Thierer & Jennifer Huddleston Skees

“He’s making a list and checking it twice. Gonna find out who’s naughty and nice.”

With the Christmas season approaching, apparently it’s not just Santa who is making a list. The Trump Administration has just asked whether a long list of emerging technologies are naughty or nice — as in whether they should be heavily regulated or allowed to be developed and traded freely.

If they land on the naughty list, these technologies could be subjected to complex export control regulations, which would limit research and development efforts in many emerging tech fields and inadvertently undermine U.S. innovation and competitiveness. Worse yet, it isn’t even clear there would be any national security benefit associated with such restrictions.  

From Light-Touch to a Long List

Generally speaking, the Trump Administration has adopted a “light-touch” approach to the regulation of emerging technology and relied on more flexible “soft law” approaches to high-tech policy matters. That’s what makes the move to impose restrictions on the trade and usage of these emerging technologies somewhat counter-intuitive. On November 19, the Department of Commerce’s Bureau of Industry and Security launched a “Review of Controls for Certain Emerging Technologies.” The notice seeks public comment on “criteria for identifying emerging technologies that are essential to U.S. national security, for example because they have potential conventional weapons, intelligence collection, weapons of mass destruction, or terrorist applications or could provide the United States with a qualitative military or intelligence advantage.”

Contemporary tech criticism displays an anti-nostalgia. Instead of reverence for the past, anxiety about the future abounds. In these visions, the future is imagined as a strange, foreign land, beset with problems. And yet, to quote that old adage, tomorrow is the visitor that is always coming but never arrives. The future never arrives because we are assembling it today.

The distance between the now and the future finds its hook in tech policy in the pacing problem, a term describing the mismatch between advancing technologies and society’s efforts to cope with them. Vivek Wadhwa explained, “We haven’t come to grips with what is ethical, let alone with what the laws should be, in relation to technologies such as social media.” In The Laws of Disruption, Larry Downes explained the pacing problem like this: “technology changes exponentially, but social, economic, and legal systems change incrementally.” Or, as Adam Thierer wondered, “What happens when technological innovation outpaces the ability of laws and regulations to keep up?”
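Downes's formulation is easy to see numerically. A toy Python sketch (both growth rates are arbitrary assumptions, not estimates) shows how an exponentially improving technology leaves an incrementally adapting legal system ever further behind:

```python
# Toy illustration of the pacing problem: technology compounds,
# legal capacity takes fixed steps. Both rates are arbitrary.
tech, law = 1.0, 1.0
for year in range(0, 21, 5):
    print(f"year {year:2d}: tech {tech:7.1f}, law {law:4.1f}, gap {tech - law:7.1f}")
    for _ in range(5):
        tech *= 1.4  # exponential: ~40% better every year
        law += 0.4   # incremental: one fixed-size step every year

# year  0: tech     1.0, law  1.0, gap     0.0
# year  5: tech     5.4, law  3.0, gap     2.4
# year 10: tech    28.9, law  5.0, gap    23.9
# year 15: tech   155.6, law  7.0, gap   148.6
# year 20: tech   836.7, law  9.0, gap   827.7
```

The widening gap, not either curve on its own, is what the pacing problem names.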

Here are three short responses.

Last week, science writer Michael Shermer tweeted out this old xkcd comic strip that I had somehow missed before. Shermer noted that it represented “another reply to pessimists bemoaning modern technologies as soul-crushing and isolating.” Similarly, there’s a meme making the rounds on Twitter that jokes about how newspapers made us antisocial in the past much as newer technologies supposedly do today.

The sentiments expressed by the comic and that meme make it clear how often people romanticize past technologies, or fail to remember that many people expressed the same fears about them as critics do today about newer ones. I’ve written dozens of articles about “moral panics” and “techno-panics,” most of which are cataloged here. The common theme of those essays is that, when it comes to fears about innovations, there really is nothing new under the sun.

Until recently, I wasn’t familiar with Freedom House’s Freedom on the Net reports. Freedom House has useful recommendations for Internet non-regulation and for protecting freedom of speech, and its Freedom on the Net reports attempt to grade a complex subject: national online freedom.

However, their latest US report recently came to my attention. Tech publications like TechCrunch and Internet regulation advocates were trumpeting the report because it touched on net neutrality. Freedom House penalized the US score because the FCC repealed the so-called 2015 net neutrality rules a few months ago.

The authors of the US report reached a curious conclusion: Internet deregulation means a loss of online freedom. In 2015, the FCC classified Internet services as a “Title II” common carrier service. In 2018, the FCC reversed course and shifted Internet services from one of the most-regulated industries in the US to one of the least-regulated. This 2018 deregulation, according to the Freedom House US report, creates an “obstacle to access” and, while the US is still “free,” the repeal moves the US slightly in the direction of “digital authoritarianism.”