Intermediary Deputization & Section 230

The Technology Policy Institute has posted the video of my talk at the 2024 Aspen Forum panel on “How Should we Regulate the Digital World?” My remarks run from 33:33–44:12 of the video. I also elaborate briefly during Q&A.

My remarks at this year’s TPI Aspen Forum panel were derived from my R Street Institute essay, “The Policy Origins of the Digital Revolution & the Continuing Case for the Freedom to Innovate,” which sketches out a pro-freedom vision for the Computational Revolution.


Over at Discourse magazine this week, my R Street colleague Jonathan Cannon and I have posted a new essay on how it has been “Quite a Fall for Digital Tech.” We mean that in two senses: the last few months have witnessed serious market turmoil for some of America’s leading tech companies, and the political situation for digital tech more generally has become perilous. Plenty of people on the Left and the Right now want a pound of flesh from the info-tech sector, and the starting cut at the body involves Section 230, the 1996 law that shields digital platforms from liability for content posted by third parties.

With the Supreme Court recently announcing it will hear Gonzalez v. Google, a case that could significantly narrow the scope of Section 230, the stakes have grown higher. Federal and state lawmakers were already looking to chip away at Sec. 230’s protections through an endless variety of regulatory measures. But if the Court guts Sec. 230 in Gonzalez, then it will really be open season on tech companies, as lawsuits will fly at every juncture whenever someone does not like a particular content moderation decision. Cannon and I note in our new essay that… Continue reading →

My colleague Wayne Brough and I recently went on the “Kibbe on Liberty” show to discuss the state of free speech on the internet. We explained how censorship is a Big Government problem, not a Big Tech problem. Here’s the complete description of the show; the link to the full episode is below.

With Elon Musk’s purchase of Twitter, we are in the middle of a national debate about the tension between censorship and free expression online. On the Right, many people are calling for government to rein in what they perceive as the excesses of Big Tech companies, while the Left wants the government to crack down on speech they deem dangerous. Both approaches make the same mistake of giving politicians authority over what we are allowed to say and hear. And with recent revelations about government agents leaning on social media companies to censor speech, it’s clear that when it comes to the online conversation, there’s no such thing as a purely private company.

For more on these issues, please see: “The Classical Liberal Approach to Digital Media Free Speech Issues.”

By: Jennifer Huddleston and Juan Martin Londoño

This year the E3 conference streamed live over Twitch, YouTube, and other online platforms—a reality that highlights the growing importance of platforms and user-generated content to the gaming industry. From streaming content on Twitch to sharing mods on Steam Workshop to funding small development studios on services such as Patreon or Kickstarter, user-generated content has proven vital for the gaming ecosystem. While these platforms have allowed space for creative interaction—which we saw in the livestream chats during E3—the legal framework that allows all of this interaction is under threat, and changes to a critical internet law could spell Game Over for user-created gaming elements.


This law, “Section 230,” is foundational to all user-generated content on the internet. Section 230 protects platforms from lawsuits over both the content they host and their moderation decisions, giving them the freedom to curate and create the kind of environment that best fits their customers. This policy is under attack, however, from policymakers on both sides of the aisle. Some Democrats argue platforms are not moderating enough content, thus allowing hate speech and voter suppression to thrive, while some Republicans believe platforms are moderating too much, which promotes “cancel culture” and the limitation of free speech.


User-generated content and the platforms that host it have contributed significantly to the growth of the gaming industry since the early days of the internet. This growth has only accelerated during the pandemic, as in 2020 the gaming industry grew 20 percent to a whopping $180 billion market. But changing Section 230 could seriously disrupt user-generated engagement with gaming, making content moderation costlier and riskier for some of gamers’ favorite platforms.

Continue reading →

Over at Discourse magazine I’ve posted my latest essay on how conservatives are increasingly flirting with the idea of greatly expanding regulatory control of private speech platforms via some sort of common carriage regulation or new Fairness Doctrine for the internet. It begins:

Conservatives have traditionally viewed the administrative state with suspicion and worried about whether their values and policy prescriptions would get a fair shake within regulatory bureaucracies. This makes their newfound embrace of common carriage regulation and media access theory (i.e., the notion that government should act to force access to private media platforms because they provide an essential public service) somewhat confusing. Recent opinions from Supreme Court Justice Clarence Thomas as well as various comments and proposals of Sen. Josh Hawley and former President Trump signal a remarkable openness to greater administrative control of private speech platforms.

Given the takedown actions some large tech companies have employed recently against some conservative leaders and viewpoints, the frustration of many on the right is understandable. But why would conservatives think they are going to get a better shake from state-regulated monopolists than they would from today’s constellation of players or, more importantly, from a future market with other players and platforms?

I continue on to explain why conservatives should be skeptical of the administrative state being their friend when it comes to the control of free speech. I end by reminding conservatives what President Ronald Reagan said in his 1987 veto of legislation to reestablish the Fairness Doctrine: “History has shown that the dangers of an overly timid or biased press cannot be averted through bureaucratic regulation, but only through the freedom and competition that the First Amendment sought to guarantee.”

Read more at Discourse, and down below you will find several other recent essays I’ve written on the topic.

In a five-part series at the American Action Forum published prior to the 2020 presidential election, I presented the candidates’ positions on a range of tech policy topics, including the race to 5G, Section 230, antitrust, and the sharing economy. Now that the election is over, it is time to examine which tech policy topics will gain more attention and how the debate around various tech policy issues may change. In no particular order, here are five key tech policy issues to be aware of heading into a new administration and a new Congress.

The Use of Soft Law for Tech Policy 

In 2021, America will likely still have a divided government, with Democrats controlling the White House and House of Representatives and Republicans expected to narrowly control the Senate. A divided government, particularly between the two houses of Congress, means many tech policy proposals will likely face logjams, leaving many tech policy questions without the legislation or hard law framework that might be desired. As a result, we are likely to continue to see “soft law”—regulation by various sub-regulatory means such as guidance documents, workshops, and industry consultations—rather than formal action. While it appears we will also see more formal regulatory action from the administrative state in a Biden Administration, those actions require a lengthy process of comments and formal or informal rulemaking. As technology continues to accelerate, many agencies turn to soft law to avoid “pacing problems,” in which policy cannot react as quickly as technology and rules may be outdated by the time they go into effect.

A soft law approach can be preferable to a hard law approach because it can better adapt to rapidly changing technologies. Policymakers in the new administration, however, should ensure that they use this tool in a way that enables innovation, with appropriate safeguards so that these actions do not become a crushing regulatory burden.

Return of the Net Neutrality Debate 

One key difference between President Trump’s and President-elect Biden’s stances on tech policy concerns whether the Federal Communications Commission (FCC) should classify internet service providers (ISPs) as Title II “common carrier services,” thereby enabling regulations such as “net neutrality” that place additional requirements on how these service providers can prioritize data. President-elect Biden has been clear in the past that he favors reinstating net neutrality.

The imposition of this classification and the accompanying regulations occurred during the Obama Administration, and the FCC removed both the Title II classification and the additional “net neutrality” regulations during the Trump Administration. Critics of these changes made many hyperbolic claims at the time, such as that Netflix streams would be interrupted or that ISPs would use their freedom in a world without net neutrality to block abortion resources or pro-feminist groups. These concerns have proven to be misguided. If anything, the COVID-19 pandemic has shown the benefits of the robust internet infrastructure and expanded investment that a light-touch approach has yielded.

It is likely that net neutrality will once again be debated. Beyond the imposition of these restrictions themselves, a repeated change in such a key classification could create additional regulatory uncertainty and deter or delay investment and innovation in this valuable infrastructure. To overcome such concerns, congressional action could provide certainty in a bipartisan and balanced way and avoid such dramatic back-and-forth swings.

Debates Regarding Sharing Economy Providers’ Classification as Independent Contractors

California voters passed Proposition 22, undoing the misguided reclassification of app-based service drivers as employees rather than independent contractors under AB5; during the campaign, however, President-elect Biden stated that he supports AB5 and called for a similar approach nationwide. Such an approach would make things more difficult for new sharing economy platforms and a wide range of independent workers (such as freelance journalists) at a time when the country is trying to recover economically.

Changing classifications to make it more difficult to treat service providers as independent contractors makes it less likely that companies such as Fiverr or TaskRabbit could provide platforms for individuals to offer their skills. Reclassification as employees also misunderstands the ways in which many people choose to engage in gig economy work and the advantages that its flexibility offers. As my AAF colleague Isabel Soto notes, a similar approach found in the Protecting the Right to Organize (PRO) Act “could see between $3.6 billion and $12.1 billion in additional costs to businesses” at a time when many are seeking to recover from the recession. Instead, both parties should look for solutions that continue to allow the benefits of the flexible arrangements that many seek in such work, while allowing for creative solutions and opportunities for businesses that wish to provide additional benefits to workers without risking reclassification.

Shifting Conversations and Debates Around Section 230 

Section 230 has recently faced most of its criticism from Republicans over allegations of anti-conservative bias. President-elect Biden, however, has also called for revoking Section 230 and for setting up a taskforce on “Online Harassment and Abuse.” While this may seem like a positive step toward resolving concerns about online content, it could also open the door to government intervention in speech in ways that are not widely agreed upon and chip away at the liability protection for content moderation.

For example, even though the Stop Enabling Sex Traffickers Act targeted the heinous crime of sex trafficking (which was already not subject to Section 230 protection) and was aimed at companies such as Backpage, where such illegal activity was known to be conducted, it has resulted in legitimate speech such as Craigslist personal ads being removed and in companies such as Salesforce being sued over what third parties used their products for. A carveout for hate speech or misinformation would only pose more difficulties for many businesses. These terms do not have clearly agreed-upon meanings and often require far more nuanced understanding for content moderation decisions. Enforcing limits on online speech, even distasteful and hateful speech, would dramatically change the prevailing interpretation of the First Amendment, under which such speech is still protected, and would require significant government intrusion to be truly enforced. In the UK, for example, an average of nine people a day were questioned or arrested over offensive or harassing “trolling” in online posts, messages, or forums under a law targeting the kind of online harassment and abuse the taskforce would be expected to consider.

Online speech has provided new ways to connect, and Section 230 keeps the barriers to entry low. It is fair to be concerned about the impact of negative behavior, but policymakers should also recognize the impact that online spaces have had on allowing marginalized communities to connect and be concerned about the unintended consequences changes to Section 230 could have. 

Continued Antitrust Scrutiny of “Big Tech” 

One part of the “techlash” that shows no sign of diminishing in the new administration or new Congress is using antitrust to go after “Big Tech.” While it remains to be seen whether the Biden Department of Justice will continue the current case against Google, there are indications that it and congressional Democrats will continue to go after these successful companies with creative theories of harm that do not reflect current antitrust standards.

Instead of assuming a large and popular company automatically merits competition scrutiny, or attempting to use antitrust to achieve policy changes for which it is an ill-fitted tool, the next administration should return to the principled approach of the consumer welfare standard. Under such an approach, antitrust focuses on consumers, not competitors: a company would need to be shown to be dominant in its market, to be abusing that dominance in some way, and to be harming consumers. This approach also provides an objective standard that lets companies and consumers know how actions will be judged under competition law. Based on what is publicly known, the proposed cases against the large tech companies fail at least one element of this test.

There will likely be a shift in some of the claimed harms, but unfortunately scrutiny of large tech companies and calls to change antitrust laws to go after these companies are likely to continue. 

Conclusion 

There are many other technology and innovation issues the next administration and Congress will face. These include not only the issues mentioned above but also emerging technologies like 5G, the Internet of Things, and autonomous vehicles. Other issues, such as the digital divide, provide an opportunity for policymakers on both sides of the aisle to come together, have a beneficial impact, and craft creative and adaptable solutions. Hopefully, the Biden Administration and the new Congress will continue a light-touch approach that allows entrepreneurs to engage with innovative ideas and continues American leadership in the technology sector.

Last week I attended the Section 230 cage match workshop at the DOJ. It was a packed house, likely because AG Bill Barr gave opening remarks. It was fortuitous timing for me: my article with Jennifer Huddleston, The Erosion of Publisher Liability in American Law, Section 230, and the Future of Online Curation, was published 24 hours before the workshop by the Oklahoma Law Review.

These were my impressions of the event:

I thought it was a pretty well-balanced event and surprisingly civil for such a contentious topic. There were strong Section 230 defenders and strong Section 230 critics, and several who fell in between. There were a couple of cheers after a few pointed statements from panelists, but the audience didn’t seem to fall on one side or the other. I’ll add that my friend and co-blogger Neil Chilson gave an impressive presentation about how Section 230 helped make the “long tail” of beneficial Internet-based communities possible.

AG Bill Barr gave the opening remarks, which are available online. A few things jumped out. He suggested that Section 230 had its place but that Internet companies are not an infant industry anymore. In his view, the courts have expanded Section 230 beyond the drafters’ intent, and the Reno decision “unbalanced” the protections, which were intended to protect minors. The gist of his statement was that the law needs to be “recalibrated.”

Each of these points was disputed by one or more panelists, but the message to the Internet industry was clear: the USDOJ is scrutinizing industry concentration and its relationship to illegal and antisocial online content.

The workshop signals that there is now a large, bipartisan coalition that would like to see Section 230 “recalibrated.” The problem for this coalition is that its members don’t agree on what types of content providers should be liable for, and they are often at cross-purposes. The problematic content ranges from sex trafficking, to stalkers, to opiate trafficking, to revenge porn, to unfair political ads. For conservatives, social media companies take down too much content, intentionally helping progressives. For progressives, social media companies leave up too much content, unwittingly helping conservatives.

I’ve yet to hear a convincing way to modify Section 230 that (a) satisfies this shaky coalition, (b) would be practical to comply with, and (c) would be constitutional.

Now, Section 230 critics are right: the law blurs the line between publisher and conduit. But this is not unique to Internet companies. The fact is, courts (and federal agencies) blurred the publisher-conduit dichotomy for fifty years for mass media distributors and common carriers as technology and social norms changed. Some cases that illustrate the phenomenon:

In Auvil v. CBS 60 Minutes, a 1991 federal district court decision, Washington apple growers sued local CBS affiliates for airing allegedly defamatory programming. The federal district court dismissed the case on the grounds that the affiliates are conduits of CBS programming. Critically, the court recognized that the CBS affiliates “had the power to” exercise editorial control over the broadcast and “in fact occasionally [did] censor programming . . . for one reason or another.” Still, case dismissed. The principle has been cited by other courts. Publishers can be conduits.

Conduits can also be publishers. In 1989, Congress passed a law requiring phone providers to restrict minors’ access to “dial-a-porn” services. Dial-a-porn companies sued. In Information Providers Coalition v. FCC, the 9th Circuit Court of Appeals held that regulated common carriers are “free under the Constitution to terminate service” to providers of indecent content. The Court relied on its decision a few years earlier in Carlin Communications, noting that when a common carrier phone company is connecting thousands of subscribers simultaneously to the same content, the “phone company resembles less a common carrier than it does a small radio station.”

Many Section 230 reformers believe Section 230 mangled the common law and would like to see the restoration of the publisher-conduit dichotomy. As our research shows, that dichotomy had already been blurred for decades. Until advocates and lawmakers acknowledge these legal trends and plan accordingly, the reformers risk throwing out the baby with the bathwater.

Relevant research:
Brent Skorup & Jennifer Huddleston, The Erosion of Publisher Liability in American Law, Section 230, and the Future of Online Curation (Oklahoma Law Review).

Brent Skorup & Joe Kane, The FCC and Quasi–Common Carriage: A Case Study of Agency Survival (Minnesota Journal of Law, Science & Technology).

Bots and Pirates

December 4, 2018

A series of recent studies has shown the centrality of social media bots to the spread of “low credibility” information online. Automated amplification, the process by which bots help share each other’s content, allows these algorithmic manipulators to spread false information across social media in seconds by increasing its visibility. These findings, combined with the already rising public perception of social media as harmful to democracy, are likely to motivate some congressional action regarding social media practices. In a divided Congress, one thing that seems to be drawing more bipartisan support is antagonism toward Big Tech.
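
To make the amplification mechanism concrete, here is a deliberately simplified sketch (my own illustration, not drawn from the studies cited above) of why coordinated resharing works: if a platform’s ranking rewards early engagement, a small ring of bots resharing each other within seconds can push low-quality content past organically shared posts. The scoring function and the numbers are hypothetical, not any platform’s actual algorithm.

```python
def toy_visibility_score(early_shares: int, base_quality: float) -> float:
    """Hypothetical ranking rule: early shares compound a post's reach."""
    return base_quality * (1 + early_shares) ** 1.5

# A post with a few genuine early shares vs. a low-quality post boosted by a bot ring.
organic = toy_visibility_score(early_shares=3, base_quality=0.9)
bot_boosted = toy_visibility_score(early_shares=50, base_quality=0.2)

print(f"organic post:     {organic:.1f}")      # ~7.2
print(f"bot-boosted post: {bot_boosted:.1f}")  # ~72.8
```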

Regulating social media to stop misinformation would mistake the symptoms of an illness for its cause. Bots spreading low-quality content online are not a cause of declining social trust but a result of it. Actions that explicitly restrict access to this type of information would likely produce the opposite of their intended effect, leading people to believe more radical conspiracies and to claim that the truth is being censored.

A parallel to the prevalence of bots spreading information today is the high rate of media piracy that lasted from the late 1990s through the mid-2000s but declined significantly over the past decade. (Many claims by anti-piracy advocates of consistently rising US piracy fail to acknowledge the rise in file sizes of high-quality downloads and the expansion of internet access; as a share of total content consumption, piracy was declining.) Content piracy and automated amplification by bots share a relationship through their fulfillment of consumer demand. Just as nobody would pirate videos if there were not some added value over legal video access, bots would not be able to generate legitimate engagement solely by gaming algorithms. There is a gap in the market to serve consumers the type of content they desire in a convenient, easy-to-access form.

This fulfillment of market demand is what changed consumer interest in piracy, and it is what is needed to change interest in “low credibility” content. In the early days of the MP3 file format, the music industry strongly resisted changing its business model, which led to the proliferation of file-sharing sites like Napster. While lawsuits may have shut down individual file-sharing sites, they did not alter the demand for pirated content, and piracy persisted. The music industry’s begrudging adoption of iTunes began to change these incentives, but pirated music streaming persisted. It was with legal streaming services like Spotify that piracy began to decline, as consumers began to receive what they asked for from legitimate sources: convenient and cheap access to content. It is important to note that pirating in the early days was not convenient; malware and slow download speeds made it a cumbersome affair. But given the laggard nature of media industry incumbents, consumers sought it out nonetheless.

The type of content considered “low credibility” today, similarly, is not convenient, as clickbait and horrible formatting intentionally make such sites painful to use in order to maximize advertising dollars extracted. The fact that consumers still seek these sites out regardless is a testament to the failure of the news industry to cater to consumer demands.

To reduce the efficacy of bots in sharing content, innovation is needed in content production or distribution to ensure convenience, low cost, and subjective user trust. This innovation may come from the social media side through experimentation with subscription services less dependent on advertising revenue. It may come from news media, either through changes in how they cater content to consumers, or through changes in reporting styles to increase engagement. It may even come through a social transformation in how news is consumed. Some thinkers believe that we are entering a reputation age, which would shift the burden of trust from a publication to individual reporters who curate our content. These changes, however, would be hampered by some of the proposed means to curtail bots on social media.

The most prominent proposal to regulate social media involves applying traditional publisher standards to online platforms by repealing Section 230 of the Communications Decency Act, which in turn would make platforms liable for the content users post. While this would certainly incentivize more aggressive action against online bots – as well as against a wide swath of borderline content – the compliance costs would be tremendous given the scale at which social media sites need to moderate content. This in turn would price out innovators who, even with fewer bots than Twitter or Facebook, could not stomach the risk of hosting any at all. Other proposals, such as the California ban on bots pretending to be human, reviving the Fairness Doctrine for online content, or antitrust action, range from unenforceable to counterproductive.

As iTunes, Spotify, Netflix, and other digital media platforms were innovating in how they deliver content to consumers, piracy enforcement gained strength to limit copyright violations, to little effect. While piracy as a problem may not have disappeared, it is clear that regulatory efforts to crack down on it contributed little, since the demand for pirated content did not stem purely from the medium of its transmission. Bots do not proliferate because of social media, but because of declining social trust. Rebuilding that trust requires building the new, not constraining the old.


A few states have passed Internet regulations because the Trump FCC, citing a 20-year US policy of leaving the Internet “unfettered by Federal or State regulation,” decided to reverse the Obama FCC’s 2015 decision to regulate the Internet with telephone laws.

Those state laws regulating Internet traffic management practices–which supporters call “net neutrality”–are unlikely to survive lawsuits because the Internet and Internet services are clearly interstate communications and FCC authority dominates. (The California bill also likely violates federal law concerning E-Rate-funded Internet access.) 

However, litigation can take years. In the meantime, ISP operators will find they face fewer regulatory headaches if they do exactly what net neutrality supporters believe the laws prohibit: block Internet content. Net neutrality laws in the US don’t apply to ISPs that “edit the Internet.”

The problem for net neutrality supporters is that Internet service providers, like cable TV providers, are protected by the First Amendment. In fact, Internet regulations with a nexus to content are subject to “strict scrutiny,” which typically means the regulations are struck down. Even leading net neutrality proponents, like the ACLU and EFF, endorse the view that ISP curation is expressive activity protected by the First Amendment.

As I’ve pointed out, these First Amendment concerns were raised during the 2016 litigation and compelled the Obama FCC to clarify that its 2015 “net neutrality” Order allows ISPs to block content. As a pro-net neutrality journalist recently wrote in TechCrunch about the 2015 rules, 

[A] tiny ISP in Texas called Alamo . . . wanted to offer a “family-friendly” edited subset of the internet to its customers.

Funnily enough, this is permitted! And by publicly stating that it has no intention of providing access to “substantially all Internet endpoints,” Alamo would exempt itself from the net neutrality rules! Yes, you read that correctly — an ISP can opt out of the rules by changing its business model. They are . . . essentially voluntary.

The author wrote this to ridicule Judge Kavanaugh, but the joke is clearly not on Kavanaugh.

In fact, under the 2015 Order, filtered Internet service was less regulated than conventional Internet service. Note that the rules were “essentially voluntary”–ISPs could opt out of regulation by filtering content. The perverse incentive of this regulatory asymmetry, whereby the FCC would regulate conventional broadband heavily but not regulate filtered Internet at all, was cited by the Trump FCC as a reason to eliminate the 2015 rules. 

State net neutrality laws basically copy and paste from the 2015 FCC regulations and will have the same problem: Any ISP that forthrightly blocks content it doesn’t wish to transmit–like adult content–and edits the Internet is unregulated.

This looks bad for net neutrality proponents leading the charge, so they often respond that the Internet regulations cover the “functional equivalent” of conventional (heavily regulated) Internet access. Therefore, the story goes, regulators can stop an ISP from filtering because an edited Internet is the functional equivalent of an unedited Internet.

Curiously, the Obama FCC didn’t make this argument in court. The reason the Obama FCC didn’t endorse this “functional equivalent” response is obvious. Let’s play this out: An ISP markets and offers a discounted “clean Internet” package because it knows that many consumers would appreciate it. To bring the ISP back into the regulated category, regulators sue, drag the ISP operators into court, and tell judges that state law compels the operator to transmit adult content.

This argument would receive a chilly reception in court. More likely is that state regulators, in order to preserve some authority to regulate the Internet, will simply concede that filtered Internet drops out of regulation, like the Obama FCC did.

As one telecom scholar wrote in a Harvard Law publication years ago, “net neutrality” is dead in the US unless there’s a legal revolution in the courts. Section 230 of the Telecom Act encourages ISPs to filter content and the First Amendment protects ISP curation of the Internet. State law can’t change that. The open Internet has been a net positive for society. However, state net neutrality laws may have the unintended effect of encouraging ISPs to filter. This is not news if you follow the debate closely, but rank-and-file net neutrality advocates have no idea. The top fear of leading net neutrality advocates is not ISP filtering; it’s the prospect that the Internet–the most powerful media distributor in history–will escape the regulatory state.

Lawmakers frequently hear impressive-sounding stats about net neutrality like “83% of voters support keeping FCC’s net neutrality rules.” This 83% number (and similar “75% of Republicans support the rules”) is based on a survey from the Program for Public Consultation released in December 2017, right before the FCC voted to repeal the 2015 Internet regulations.

These numbers should be treated with skepticism. This survey generates these high approval numbers by asking about net neutrality “rules” found nowhere in the 2015 Open Internet Order. The released survey does not ask about the substance of the Order, like the Title II classification, government price controls online, or the FCC’s newly created authority to approve or disapprove of new Internet services.

Here’s how the survey frames the issue:

Under the current regulations, ISPs are required to:   

provide customers access to all websites on the internet.   

provide equal access to all websites without giving any websites faster or slower download speeds.  

The survey then essentially asks participants whether they favor these “regulations.” The nearly 400-page Order is long and complex, and I’m guessing the survey creators lacked expertise in this area, because this is a serious misinterpretation of the Order. This framing is how net neutrality advocates discuss the issue, but the Obama FCC’s interpretations of the 2015 Order look nothing like these survey questions. Exaggeration and misinformation are common when discussing net neutrality, and unfortunately these pollsters contributed to the problem. (The Washington Post Fact Checker column recently assigned “Three Pinocchios” to similar claims from net neutrality advocates.)

Let’s break down these rules ostensibly found in the 2015 Order.

“ISPs are required to provide customers access to all websites on the internet”

This is wrong. The Obama FCC was quite clear in the 2015 Order and during litigation that ISPs are free to filter the Internet and block websites. From the oral arguments:

FCC lawyer: “If [ISPs] want to curate the Internet…that would drop them out of the definition of Broadband Internet Access Service.”
Judge Williams: “They have that option under the Order?”
FCC lawyer: “Absolutely, your Honor. …If they filter the Internet and don’t provide access to all or substantially all endpoints, then…the rules don’t apply to them.”

As a result, the judges who upheld the Order said, “The Order…specifies that an ISP remains ‘free to offer ‘edited’ services’ without becoming subject to the rule’s requirements.”

Further, in the 1996 Telecom Act, Congress gave Internet access providers legal protection in order to encourage them to block lewd and “objectionable content.” Today, many ISPs offer family-friendly Internet access that blocks, say, pornographic and violent content. An FCC Order cannot and did not rewrite the Telecom Act and cannot require “access to all websites on the internet.”

“ISPs are required to provide equal access to all websites without giving any websites faster or slower download speeds”

Again, wrong. There is no “equal access to all websites” mandate (see above). Further, the 2015 Order allows ISPs to prioritize certain Internet traffic because preventing prioritization online would break Internet services.

This myth–that net neutrality rules require ISPs to be dumb pipes, treating all bits the same–has been circulated for years but is derided by network experts. MIT computer scientist and early Internet developer David Clark colorfully dismissed this idea as “happy little bunny rabbit dreams.” He pointed out that prioritization has been built into Internet protocols for years and “[t]he network is not neutral and never has been.”
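
For readers who want a concrete sense of what “built into Internet protocols” means, here is a minimal sketch (my own, not from Clark or the original post) of one long-standing prioritization hook: the DSCP/ToS byte in the IP header, which lets an application request a service class for its packets. The address and port below are placeholders, and whether any router actually honors the marking is entirely up to each network operator.

```python
import socket

# Open a UDP socket and mark its packets with DSCP "Expedited Forwarding" (46),
# the class commonly used for latency-sensitive traffic such as voice.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
EF_DSCP = 46
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)  # DSCP occupies the upper 6 bits

# Packets sent on this socket now carry the marking; networks may queue them
# ahead of best-effort traffic, or ignore the marking entirely.
sock.sendto(b"hello", ("192.0.2.1", 9999))  # 192.0.2.1 is a documentation-only address
sock.close()
```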

Other experts, such as tech entrepreneur and investor Mark Cuban and President Obama’s former chief technology officer Aneesh Chopra, have noted the need for Internet “fast lanes” as Internet services grow more diverse. Further, the nature of interconnection agreements and content delivery networks means that some websites pay for and receive better service than others.

This is not to say the Order is toothless. It authorizes government price controls and invents a vague “general conduct standard” that gives the agency broad authority to reject, favor, and restrict new Internet services. The survey, however, declined to ask members of the public about the substance of the 2015 rules and instead asked about support for net neutrality slogans that have only a tenuous relationship with the actual rules.

“Net neutrality” has always been about giving the FCC, the US media regulator, vast authority to regulate the Internet. In doing so, the 2015 Order rejects the 20-year policy of the United States, codified in law, that the Internet and Internet services should be “unfettered by Federal or State regulation.” The US tech and telecom sector thrived before 2015 and the 2017 repeal of the 2015 rules will reinstate, fortunately, that light-touch regulatory regime.