Yesterday, the Supreme Court handed down its decision in South Dakota v. Wayfair, a case about online sales tax collection. As always, the holding is key: “Because the physical presence rule of Quill is unsound and incorrect, Quill Corp. v. North Dakota, 504 U. S. 298, and National Bellas Hess, Inc. v. Department of Revenue of Ill., 386 U. S. 753, are overruled.” What follows below is a roundup of reactions to and commentary on the decision.

Two years ago, ProPublica initiated a conversation over the use of risk assessment algorithms when they concluded that a widely used “score proved remarkably unreliable in forecasting violent crime” in Florida. Their examination of the racial disparities in scoring has been cited countless times, often as a proxy for the power of automation and algorithms in daily life. Indeed, as the authors concluded, these scores are “part of a larger examination of the powerful, largely hidden effect of algorithms in American life.”

As this examination continues, two precepts are worth keeping in mind. First, the social significance of algorithms needs to be considered, not just their statistical significance within a model. While the accuracy of an algorithm is important, more emphasis should be placed on how it is used within institutional settings. And second, fairness is not a single idea. Mandates for certain kinds of fairness can come at the expense of other forms of fairness. As always, policymakers need to be cognizant of the trade-offs.
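To see why those trade-offs are unavoidable rather than a matter of sloppy modeling, consider a minimal sketch of my own (the numbers below are invented for illustration and are not drawn from the ProPublica analysis). It compares two reasonable fairness criteria for a risk score: equal precision among people flagged “high risk,” and equal false positive rates. When two groups have different underlying rates of reoffending, a score can satisfy one criterion only by violating the other.

```python
# Hypothetical confusion-matrix counts for two groups scored by the same tool.
# These numbers are made up to illustrate the point; they are not real data.

def rates(tp, fp, fn, tn):
    """Return (precision among those flagged high-risk, false positive rate)."""
    precision = tp / (tp + fp)   # of those flagged, the share who reoffended
    fpr = fp / (fp + tn)         # of those who did not reoffend, the share flagged
    return precision, fpr

group_a = dict(tp=6, fp=2, fn=0, tn=2)    # base rate of reoffending: 60%
group_b = dict(tp=3, fp=1, fn=1, tn=15)   # base rate of reoffending: 20%

for name, counts in [("A", group_a), ("B", group_b)]:
    precision, fpr = rates(**counts)
    print(f"Group {name}: precision = {precision:.2f}, false positive rate = {fpr:.2f}")

# Both groups see precision of 0.75 -- a "high risk" label means the same thing
# for each -- yet Group A's false positive rate is 0.50 versus 0.06 for Group B.
# With unequal base rates, equalizing one metric forces a gap in the other.
```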

The National Academies of Sciences, Engineering, and Medicine has released an amazing new report, “Assessing the Risks of Integrating Unmanned Aircraft Systems (UAS) into the National Airspace System.” In what the Wall Street Journal rightly refers to as an “unusually strongly worded report,” the group of experts assembled by the National Academies calls for a sea change in regulatory attitudes and policies toward Unmanned Aircraft Systems (or “drones”) and the nation’s airspace more generally.

The report uses the term “conservative” or “overly conservative” more than a dozen times to describe the Federal Aviation Administration’s (FAA) problematic current approach toward drones. It points out that the agency has “a culture with a near-zero tolerance for risk,” and that the agency needs to adjust that culture to take into account “the various ways in which this new technology may reduce risk and save lives.” (Ch. S, p. 2) The report continues:

The committee concluded that “fear of making a mistake” drives a risk culture at the FAA that is too often overly conservative, particularly with regard to UAS technologies, which do not pose a direct threat to human life in the same way as technologies used in manned aircraft. An overly conservative attitude can take many forms. For example, FAA risk avoidance behavior is often rewarded, even when it is excessively risk averse, and rewarded behavior is repeated behavior. Balanced risk decisions can be discounted, and FAA staff may conclude that allowing new risk could endanger their careers even when that risk is so minimal that it does not exceed established safety standards.  The committee concluded that a better measure for the FAA to apply is to ask the question, “Can we make UAS as safe as other background risks that people experience daily?” As the committee notes, we do not ground airplanes because birds fly in the airspace, although we know birds can and do bring down aircraft.

[. . . ]

In many cases, the focus has been on “What might go wrong?” instead of a holistic risk picture: “What is the net risk/benefit?” Closely related to this is what the committee considers to be paralysis wherein ever more data are often requested to address every element of uncertainty in a new technology. Flight experience cannot be gained to generate these data due to overconservatism that limits approvals of these flights. Ultimately, the status quo is seen as safe. There is too little recognition that new technologies brought into the airspace by UAS could improve the safety of manned aircraft operations, or may mitigate, if not eliminate, some nonaviation risks. (p. S-2)

Importantly, the report makes it clear that the problem here is not just that “an overly conservative risk culture that overestimates the severity and the likelihood of UAS risk can be a significant barrier to introduction and development of these technologies”; more profoundly, it highlights how “Avoiding risk entirely by setting the safety target too high creates imbalanced risk decisions and can degrade overall safety and quality of life.” (pp. 3-6, 3-7) In other words, we should want a more open and common-sense-oriented approach to drones, not only to encourage more life-enriching innovation, but also because it could actually make us safer as a result.

No Reward without Some Risk

What the National Academies report is really saying here is that there can be no reward without some risk.  This is something I have spent a great deal of time writing about in my last book, a recent book chapter, and various other essays and journal articles over the past 25 years.  As I noted in my last book, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about.”  If we want a wealthier, healthier, and safer society, we must embrace change and risk-taking to get us there.

This is exactly what the National Academies report is getting at when it notes that the FAA’s “overly conservative culture prevents safety beneficial operations from entering the airspace. The focus is on what might go wrong. More dialogue on potential benefits is needed to develop a holistic risk picture that addresses the question, What is the net risk/benefit?” (p. 3-10)

In other words, all safety regulation involves trade-offs, and if (to paraphrase a classic Hardin cartoon you’ll see to your right) we consider every potential risk except the risk of avoiding all risks, the result will be not only a decline in short-term innovation, but also a corresponding decline in safety and overall living standards over time.

Countless risk scholars have studied this process and come to the same conclusion. “We could virtually end all risk of failure by simply declaring a moratorium on innovation, change, and progress,” notes engineering historian Henry Petroski. But the costs to society of doing so would be catastrophic, of course. “The history of the human race would be dreary indeed if none of our forebears had ever been willing to accept risk in return for potential achievement,” observed H.W. Lewis, an expert on technological risk trade-offs.

The most important book ever written on this topic was Aaron Wildavsky’s 1988 masterpiece, Searching for Safety. Wildavsky warned of the dangers of “trial without error” reasoning and contrasted it with the trial-and-error method of evaluating risk and seeking wise solutions to it. Wildavsky argued that real wisdom is born of experience and that we can learn how to be wealthier and healthier as individuals and a society only by first being willing to embrace uncertainty and even occasional failure. As he put it:

The direct implication of trial without error is obvious: If you can do nothing without knowing first how it will turn out, you cannot do anything at all. An indirect implication of trial without error is that if trying new things is made more costly, there will be fewer departures from past practice; this very lack of change may itself be dangerous in forgoing chances to reduce existing hazards. . . . Existing hazards will continue to cause harm if we fail to reduce them by taking advantage of the opportunity to benefit from repeated trials.

When this logic takes the form of public policy prescriptions, it is referred to as the “precautionary principle,” which generally holds that, because new ideas or technologies could pose some theoretical danger or risk in the future, public policies should control or limit the development of such innovations until their creators can prove that they won’t cause any harms.

Again, if we adopt that attitude, human safety actually suffers because it holds back beneficial experiments aimed at improving the human condition. As the great economic historian Joel Mokyr argues, “technological progress requires above all tolerance toward the unfamiliar and the eccentric.” But the regulatory status quo all too often rejects “the unfamiliar and the eccentric” out of an abundance of caution. While usually well-intentioned, that sort of status quo thinking holds back new and better ways of doing old things, as well as entirely new things. The end result is that real health and safety advances are ignored or forgone.

How Status Quo Thinking at the FAA Results in Less Safety

This is equally true for air safety and FAA regulation of drones. “Ultimately, the status quo is seen as safe,” the National Academies report notes. “There is too little recognition that new technologies brought into the airspace by UAS could improve the safety of manned aircraft operations, or may mitigate, if not eliminate, some nonaviation risks.” Examples of the life-saving potential of drones have already been well documented.

Drones have already been used to monitor fires, help with search-and-rescue missions for missing people or animals, assist lifeguards by dropping life vests to drowning people, deliver medicines to remote areas, and help with disaster monitoring and recovery efforts. But that really just scratches the surface in terms of their potential.

Some people scoff at the idea of drones being used to deliver small packages to our offices or homes. But consider how many of those packages are delivered by human-operated vehicles that are far more likely to be involved in dangerous traffic accidents on our overcrowded roadways. If drones were used to make some of those deliveries, we might be able to save a lot of lives. Or how about an elderly person stuck at home during a storm, only to realize they are out of some essential good or medicine that is a long drive away. Are we better off having them (or someone else) get behind the wheel to drive and get it, or might a drone be able to deliver it more safely?

The authors of the National Academies report understand this, as they made clear when they concluded that “operation of UAS has many advantages and may improve the quality of life for people around the world. Avoiding risk entirely by setting the safety target too high creates imbalanced risk decisions and can degrade overall safety and quality of life.” (Ch. 3, p. 5-6)

Reform Ideas: Use the “Innovator’s Presumption” & “Sunsetting Imperative”

Given that reality, the National Academies report makes several sensible reform recommendations aimed at countering the FAA’s hyper-conservatism and bias for the broken regulatory status quo. I won’t go through them all, but I think they are an excellent set of reforms that deserve to be taken seriously.

I do, however, want to highly recommend everyone take a close look at this one outstanding recommendation in Chapter 3, which is aimed at keeping things moving and making sure that status quo thinking doesn’t freeze beneficial new forms of airspace innovation. Specifically, the National Academies report recommends that:

The FAA should meet requests for certifications or operations approvals with an initial response of “How can we approve this?” Where the FAA employs internal boards of executives throughout the agency to provide input on decisions, final responsibility and authority and accountability for the decision should rest with the executive overseeing such boards. A time limit should be placed on responses from each member of the board, and any “No” vote should be accompanied with a clearly articulated rationale and suggestion for how that “No” vote could be made a “Yes.” (Ch. 3, p. 8)

I absolutely love this reform idea because it essentially combines elements of two general innovation policy reform ideas that I discussed in my recent essay, “Converting Permissionless Innovation into Public Policy: 3 Reforms.” In that piece, I proposed the idea of instituting an “Innovator’s Presumption” that would read: “Any person or party (including a regulatory authority) who opposes a new technology or service shall have the burden to demonstrate that such proposal is inconsistent with the public interest.” I also proposed a so-called “Sunsetting Imperative” that would read: “Any existing or newly imposed technology regulation should include a provision sunsetting the law or regulation within two years.”

The National Academies report recommendation above basically embodies the spirit of both the Innovator’s Presumption and the Sunsetting Imperative. It puts the burden of proof on opponents of change and then creates a sort of shot clock to keep things moving.

These are the kind of reforms we need to make sure status quo thinking at regulatory agencies doesn’t hold back life-enriching and life-saving innovations. It’s time for a change in the way business is done at the FAA to make sure that regulations are timely, effective, and in line with common sense. Sadly, as the new National Academies report makes clear, today’s illogical policies governing airspace innovation are having counterproductive results that hurt society.

The Internet is a great tool for women’s empowerment, because it gives us the freedom to better our lives in ways that were previously far more limited. Today, the FCC’s Restoring Internet Freedom Order helped the Internet become even freer.

There has been a lot of misinformation, and no shortage of scare tactics, about the previous administration’s so-called “net neutrality” rules. But the Obama-era Open Internet Order regulations were not neutral at all. Rather, they ham-handedly forced Internet Service Providers (ISPs) into a Depression-era regulatory classification known as a Title II common carrier. This would have slowed Internet dynamism, and with it, opportunities for women.

Today’s deregulatory move by the FCC reverses that decision, which will allow more ISPs to enter the market. More players in the market make Internet service better, faster, cheaper, and more widely available. This is especially good for women, who have particularly benefited from the increased connectivity and flexibility that the Internet has provided.


Last week the U.S. Court of Appeals for the 11th Circuit vacated a Federal Trade Commission order requiring medical diagnostic company LabMD to adopt reasonable data security, handing the FTC a loss in an important data security case. In some ways, this outcome is not surprising. This was a close case, with a tenacious defendant, that raised important questions about FTC authority, how to interpret “unfairness” under the FTC Act, and the Commission’s data security program.

Unfortunately, the decision answers none of those important questions and makes a total hash of the FTC’s current unfairness law. While some critics of the FTC’s data security program may be pleased with the outcome of this decision, they ought to be concerned with its reasoning, which harkens back to the “public policy” test for unfairness that was greatly abused by the FTC in the 1970s.

The most problematic parts of this decision are likely dicta, but it is still worth describing how sharply this decision conflicts with the FTC’s modern unfairness test.  The court’s reasoning could implicate not only the FTC’s data security authority but its overall authority to police unfair practices of any kind.

(I’m going to skip the facts and procedural background of the case because the key issues are matters of law unrelated to the facts of the case. The relevant facts and procedure are laid out in the decision’s first and most lucid section. I’m also going to limit this piece to the decision’s unfairness analysis. There’s more to say about the court’s conclusion that the FTC’s order is unenforceable, but this post is already long. Interesting takes here and here.)

In short, the court’s decision attempts to rewrite a quarter century of FTC unfairness law.  By doing so, it elevates a branch of unfairness analysis that, in the 1970s, landed the FTC in big trouble.  First, I’ll summarize the current unfairness test as stated in the FTC Act. Next, I’ll discuss the previous unfairness test, the trouble it caused, and how that resulted in the modern test. Finally, I’ll look at how the LabMD decision rejects the modern test and discuss some implications.

The Modern Unfairness Test

If you’ve read an FTC complaint with an unfairness count in the last two decades, you’re probably familiar with the modern unfairness test. A practice is unfair if it causes substantial injury that consumers cannot reasonably avoid and that is not outweighed by benefits to consumers or competition. In 1994, Congress codified this three-part test in Section 5(n) of the FTC Act, which reads in full:

The Commission shall have no authority under this section or section 57a of this title to declare unlawful an act or practice on the grounds that such act or practice is unfair unless the act or practice [1] causes or is likely to cause substantial injury to consumers which [2] is not reasonably avoidable by consumers themselves and [3] not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination. [Emphasis added]

The text of Section 5(n) makes two things clear: 1) a practice is not unfair unless it meets the three-part consumer injury test and 2) public policy considerations can be helpful evidence of unfairness but are not sufficient or even necessary to demonstrate it. Thus, the three-part consumer injury test is centrally important to the unfairness analysis. Indeed, the three-part consumer injury test set out in Section 5(n) has been synonymous with the unfairness test for decades.

The Previous, Problematic Test for Unfairness

But the unfairness test used to be quite different. In outlining the test’s history, I am going to borrow heavily from Howard Beales’ excellent 2003 essay, “The FTC’s Use of Unfairness Authority: Its Rise, Fall, and Resurrection.” (Beales was the Director of the FTC’s Bureau of Consumer Protection under Republican FTC Chairman Timothy Muris.) Beales describes the previous test for unfairness:

In 1964 … the Commission set forth a test for determining whether an act or practice is “unfair”: 1) whether the practice “offends public policy” – as set forth in “statutes, the common law, or otherwise”; 2) “whether it is immoral, unethical, oppressive, or unscrupulous; 3) whether it causes substantial injury to consumers (or competitors or other businessmen).” …. [T]he Supreme Court, while reversing the Commission in Sperry & Hutchinson cited the Cigarette Rule unfairness criteria with apparent approval….

This three-part test – public policy, immorality, and/or substantial injury – gave the agency enormous discretion, and the FTC began to wield that discretion in a problematic manner. Beales describes the effect of the S&H dicta:

Emboldened by the Supreme Court’s dicta, the Commission set forth to test the limits of the unfairness doctrine. Unfortunately, the Court gave no guidance to the Commission on how to weigh the three prongs – even suggesting that the test could properly be read disjunctively.

The result was a series of rulemakings relying upon broad, newly found theories of unfairness that often had no empirical basis, could be based entirely upon the individual Commissioner’s personal values, and did not have to consider the ultimate costs to consumers of foregoing their ability to choose freely in the marketplace. Predictably, there were many absurd and harmful results.

According to Beales, “[t]he most problematic proposals relied heavily on ‘public policy’ with little or no consideration of consumer injury.”  This regulatory overreach triggered a major backlash from businesses, Congress, and the media. The Washington Post called the FTC the “National Nanny.” Congress even defunded the agency for a time.

The backlash prompted the agency to revisit the S&H criteria.  As Beales describes,

As the Commission struggled with the proper standard for unfairness, it moved away from public policy and towards consumer injury, and consumer sovereignty, as the appropriate focus…. On December 17, 1980, a unanimous Commission formally adopted the Unfairness Policy Statement, and declared that “[un]justified consumer injury is the primary focus of the FTC Act, and the most important of the three S&H criteria.”

This Unfairness Statement recast the relationship between the three S&H criteria, discarding the “immoral” prong entirely and elevating consumer injury above public policy: “Unjustified consumer injury is the primary focus of the FTC Act, and the most important of the three S&H criteria. By itself it can be sufficient to warrant a finding of unfairness.” [emphasis added]  It was this Statement that first established the three-part consumer injury test now codified in Section 5(n).

Most importantly for our purposes, the statement explained the optional nature of the S&H “public policy” factor. As Beales details,

“[I]n most instances, the proper role of public policy is as evidence to be considered in determining the balance of costs and benefits,” although “public policy can ‘independently support a Commission action . . . when the policy is so clear that it will entirely determine the question of consumer injury, so there is little need for separate analysis by the Commission.’” [emphasis added]

In a 1982 letter to Congress, the Commission reiterated that public policy “is not a necessary element of the definition of unfairness.”

As the 1980s progressed, the Unfairness Policy Statement, and specifically its three-part test for consumer injury, “became accepted as the appropriate test for determining unfairness…” But not all was settled. Beales again:

The danger of unfettered “public policy” analysis as an independent basis for unfairness still existed, however [because] the Unfairness Policy Statement itself continued to hold out the possibility of public policy as the sole basis for a finding of unfairness. A less cautious Commission might ignore the lessons of history, and dust off public policy-based unfairness. … When Congress eventually reauthorized the FTC in 1994, it codified the three-part consumer injury unfairness test. It also codified the limited role of public policy. Under the statutory standard, the Commission may consider public policies, but it cannot use public policy as an independent basis for finding unfairness. The Commission’s long and dangerous flirtation with ill-defined public policy as a basis for independent action was over.

Flirting with Public Policy, Again

To sum up, chastened for overreaching its authority using the public policy prong of the S&H criteria, the FTC refocused its unfairness authority on consumer injury. Congress ratified that refocus in Section 5(n) of the FTC Act, as I’ve discussed above. Today, under modern unfairness law, FTC complaints rarely make public policy arguments, and then only to bolster evidence of consumer injury.

In last week’s LabMD decision, the 11th Circuit rejects this long-standing approach to unfairness. Consider these excerpts from its decision:

“The Commission must find the standards of unfairness it enforces in ‘clear and well-established’ policies that are expressed in the Constitution, statutes, or the common law.”

“An act or practice’s ‘unfairness’ must be grounded in statute, judicial decisions – i.e., the common law – or the Constitution. An act or practice that causes substantial injury but lacks such grounding is not unfair within Section 5(a)’s meaning.”

“Thus, an ‘unfair’ act or practice is one which meets the consumer-injury factors listed above and is grounded in well-established legal policy.”

And consider this especially salty bite of pretzel logic based on a selective citation of the FTC Act:

“Section 5(n) now states, with regard to public policy, ‘In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.’  We do not take this ambiguous statement to mean that the Commission may bring suit purely on the basis of substantial consumer injury. The act or practice alleged to have caused injury must still be unfair under a well-established legal standard, whether grounded in statute, the common law, or the Constitution.” [emphasis added]

Yet those two sentences in 5(n) are quite clear when read in context with the full paragraph, which requires the three-part consumer injury test but merely permits the FTC to consider public policies as evidence. The court’s interpretation here is also undercut by the FTC’s historic misuse of public policy and Congress’s subsequent intent in Section 5(n) to limit FTC overreach by restricting the use of public policy evidence. Congress sought to restrict the FTC’s use of public policy; the 11th Circuit’s decision seeks to require it.

To be fair, the court is not exactly returning to the wild pre-Unfairness Statement days when the FTC thought public policy alone was sufficient to find an act or practice unfair.  Instead, the court has developed a new, stricter test for unfairness that requires both consumer injury and offense to public policy.
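To make the structural difference concrete, here is a rough sketch of my own, not language from the statute or the opinion, expressing the two tests as simple predicates (the field and function names are illustrative labels, nothing more):

```python
from dataclasses import dataclass

@dataclass
class Practice:
    substantial_injury: bool        # causes or is likely to cause substantial injury
    reasonably_avoidable: bool      # consumers could reasonably avoid it themselves
    outweighed_by_benefits: bool    # outweighed by benefits to consumers or competition
    offends_public_policy: bool     # grounded in a well-established statute,
                                    # common law rule, or constitutional provision

def unfair_under_section_5n(p: Practice) -> bool:
    # The codified test: three mandatory consumer-injury prongs. Public policy
    # may be weighed as evidence but is neither necessary nor sufficient.
    return (p.substantial_injury
            and not p.reasonably_avoidable
            and not p.outweighed_by_benefits)

def unfair_under_labmd(p: Practice) -> bool:
    # The 11th Circuit's stricter reading: the same injury prongs plus a
    # mandatory grounding in well-established public policy.
    return unfair_under_section_5n(p) and p.offends_public_policy
```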

After crafting this bespoke unfairness test by inserting a mandatory public policy element, the decision then criticizes the FTC complaint for “not explicitly” citing the public policy source for its “standard of unfairness.” But it is obvious why the FTC didn’t include a public policy element in the complaint – no one has thought it necessary for more than two decades. (Note, however, that the Commission’s decision does cite numerous statutes and common law principles as public policy evidence of consumer injury in this case.)

The court supplies the missing public policy element for the FTC: “It is apparent to us, though, that the source is the common law of negligence.” The court then determines that “the Commission’s action implies” that common law negligence “is a source that provides standards for determining whether an act or practice is unfair….”

Having thus rewritten the Commission’s argument and decades of FTC law, the court again surprises. Rather than analyze LabMD’s liability under this new standard, the court “assumes arguendo that the Commission is correct and that LabMD’s negligent failure to design and maintain a reasonable data security program invaded consumers’ right of privacy and thus constituted an unfair act or practice.”

Thus, the court does not actually rely on the unfairness test it has set out, arguably rendering that entire analysis dicta.

Why Dicta?

What is going on here? I believe the court is suggesting how data security cases ought to be pled, even though it cannot require this standard under Section 5(n) – and perhaps would not want to, given the collateral effect on other types of unfairness cases.

The court clearly wanted to signal something through this exercise. Otherwise, it would have been much easier to have assumed arguendo LabMD’s liability under the existing three-prong consumer injury unfairness test contained in the FTC’s complaint. Instead, the court constructs a new unfairness test, interprets the FTC’s complaint to match it, and then appears to render its unfairness analysis dicta.

So, what exactly is the court signaling? This new unfairness test is stricter than the Section 5(n) definition of unfairness, and thus any complaint that satisfies the LabMD test would also satisfy the statutory test.  Thus, perhaps the court seeks to encourage the FTC to plead data security complaints more strictly than legally necessary by including references to public policy.

Had the court applied its bespoke standard to find that LabMD was not liable, I think the FTC would have had no choice but to appeal the decision. By upsetting 20+ years of unfairness law, the court’s analysis would have affected far more than just the FTC’s data security program. The FTC brings many non-data security cases under its unfairness authority, including illegal bill cramming, unauthorized payment processing, and other types of fraud where deception cannot adequately address the problem. The new LabMD unfairness test would affect many such areas of FTC enforcement. But by assuming arguendo LabMD’s liability, the court may have avoided such effects and thus reduced the FTC’s incentive to appeal on these grounds.

Dicta or not, appeal or not, the LabMD decision has elevated unfairness’s “public policy” factor. Given the FTC’s misuse of that factor in the past, FTC watchers ought to keep an eye out.

—-

Last week’s LabMD decision will shape the constantly evolving data security policy environment.  At the Charles Koch Institute, we believe that a healthy data security policy environment will encourage permissionless innovation while addressing real consumer harms as they arise.  More broadly, we believe that innovation and technological progress are necessary to achieve widespread human flourishing.  And we seek to foster innovation-promoting environments through educational programs and academic grant-making.

Lawmakers frequently hear impressive-sounding stats about net neutrality like “83% of voters support keeping FCC’s net neutrality rules.” This 83% number (and similar “75% of Republicans support the rules”) is based on a survey from the Program for Public Consultation released in December 2017, right before the FCC voted to repeal the 2015 Internet regulations.

These numbers should be treated with skepticism. This survey generates these high approval numbers by asking about net neutrality “rules” found nowhere in the 2015 Open Internet Order. The released survey does not ask about the substance of the Order, like the Title II classification, government price controls online, or the FCC’s newly created authority to approve or disapprove new Internet services.

Here’s how the survey frames the issue:

Under the current regulations, ISPs are required to:   

provide customers access to all websites on the internet.   

provide equal access to all websites without giving any websites faster or slower download speeds.  

The survey then essentially asks participants whether they favor these “regulations.” The nearly 400-page Order is long and complex, and I’m guessing the survey creators lacked expertise in this area, because this is a serious misinterpretation of the Order. This framing is how net neutrality advocates discuss the issue, but the Obama FCC’s interpretations of the 2015 Order look nothing like these survey questions. Exaggeration and misinformation are common when discussing net neutrality, and unfortunately these pollsters have contributed to both. (The Washington Post Fact Checker column recently assigned “Three Pinocchios” to similar net neutrality advocate claims.)

Let’s break down these rules ostensibly found in the 2015 Order.

“ISPs are required to provide customers access to all websites on the internet”

This is wrong. The Obama FCC was quite clear in the 2015 Order and during litigation that ISPs are free to filter the Internet and block websites. From the oral arguments:

FCC lawyer: “If [ISPs] want to curate the Internet…that would drop them out of the definition of Broadband Internet Access Service.”
Judge Williams: “They have that option under the Order?”
FCC lawyer: “Absolutely, your Honor. …If they filter the Internet and don’t provide access to all or substantially all endpoints, then…the rules don’t apply to them.”

As a result, the judges who upheld the Order said, “The Order…specifies that an ISP remains ‘free to offer ‘edited’ services’ without becoming subject to the rule’s requirements.”

Further, in the 1996 Telecom Act, Congress gave Internet access providers legal protection in order to encourage them to block lewd and “objectionable content.” Today, many ISPs offer family-friendly Internet access that blocks, say, pornographic and violent content. An FCC Order cannot and did not rewrite the Telecom Act and cannot require “access to all websites on the internet.”

“ISPs are required to provide equal access to all websites without giving any websites faster or slower download speeds”

Again, wrong. There is no “equal access to all websites” mandate (see above). Further, the 2015 Order allows ISPs to prioritize certain Internet traffic because preventing prioritization online would break Internet services.

This myth–that net neutrality rules require ISPs to be dumb pipes, treating all bits the same–has been circulated for years but is derided by network experts. MIT computer scientist and early Internet developer David Clark colorfully dismissed this idea as “happy little bunny rabbit dreams.” He pointed out that prioritization has been built into Internet protocols for years and that “[t]he network is not neutral and never has been.”
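Clark’s point is easy to see at the protocol level. As a small sketch of my own (not something taken from Clark or from the Order), the snippet below marks a UDP socket’s traffic with a DiffServ code point, a priority field that has lived in the IP header for decades; whether any given network honors the marking is a separate operational question.

```python
import socket

# DSCP "Expedited Forwarding" (46), commonly used for latency-sensitive traffic
# such as voice, shifted into the upper six bits of the legacy ToS byte.
EF = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask the OS to mark outgoing IPv4 packets from this socket (works on POSIX
# systems that expose IP_TOS; platform support varies).
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF)

# The destination below is a documentation address (a placeholder, not a real
# service); packets sent here simply carry a non-default priority marking.
sock.sendto(b"latency-sensitive payload", ("192.0.2.1", 5004))
```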

Other experts, such as tech entrepreneur and investor Mark Cuban and President Obama’s former chief technology officer Aneesh Chopra, have pointed to the need for Internet “fast lanes” as Internet services grow more diverse. Further, the nature of interconnection agreements and content delivery networks means that some websites pay for and receive better service than others.

This is not to say the Order is toothless. It authorizes government price controls and invents a vague “general conduct standard” that gives the agency broad authority to reject, favor, and restrict new Internet services. The survey, however, declined to ask members of the public about the substance of the 2015 rules and instead asked about support for net neutrality slogans that have only a tenuous relationship with the actual rules.

“Net neutrality” has always been about giving the FCC, the US media regulator, vast authority to regulate the Internet. In doing so, the 2015 Order rejected the 20-year policy of the United States, codified in law, that the Internet and Internet services should be “unfettered by Federal or State regulation.” The US tech and telecom sector thrived before 2015, and the 2017 repeal of the 2015 rules will, fortunately, reinstate that light-touch regulatory regime.

Mobile broadband is a tough business in the US. There are four national carriers–Verizon, AT&T, T-Mobile, and Sprint–but since about 2011, mergers have been contemplated (and attempted, but blocked). Recently, the competition has gotten fiercer. The higher data buckets and unlimited data plans have been great for consumers.

The FCC’s latest mobile competition report, citing UBS data, says that industry ARPU (basically, monthly revenue per subscriber), which had been pretty stable since 1998, declined significantly from about $46 in 2013 to about $36 in 2016. These revenue pressures seemed to fall hardest on Sprint, which in February issued $1.5 billion of “junk bonds” to help fund its network investments. Analysts pointed out in 2016 that “Sprint has not reported full-year net profits since 2006.” Further, mobile TV watching is becoming a bigger business. AT&T and Verizon both plan to offer a TV bundle to their wireless customers this year, and T-Mobile’s purchase of Layer3 indicates an interest in offering a mobile TV service.

It’s these trends that probably pushed T-Mobile and Sprint to announce yesterday their intention to merge. All eyes will be on the DOJ and the FCC as their competition divisions consider whether to approve the merger.

The Core Arguments

Merger opponents’ primary argument is one that has been raised several times since the aborted 2011 AT&T-T-Mobile merger: this “4 to 3” merger significantly raises the prospect of “tacit collusion.” After the merger, the story goes, the 3 remaining mobile carriers won’t work as hard to lower prices or improve services. While outright collusion on prices is illegal, opponents have a point that tacit collusion is more difficult for regulators to prove, to prevent, and to prosecute.

The counterargument, that T-Mobile and Sprint are already making, is that “mobile” is not a distinct market anymore–technologies and services are converging. Therefore, tacit collusion won’t be feasible because mobile broadband is increasingly competing with landline broadband providers (like Comcast and Charter), and possibly even media companies (like Netflix and Disney). Further, they claim, T-Mobile and Sprint going it alone will each struggle to deploy a capex-intensive 5G network that can compete with AT&T, Verizon, Comcast-NBCU, and the rest, but the merged company will be a formidable competitor in TV and in consumer and enterprise broadband.

Competitive Review

Any prediction about whether the deal will be approved or denied is premature. This is a horizontal merger in a highly-visible industry and it will receive an intense antitrust review. (Rachel Barkow and Peter Huber have an informative 2001 law journal article about telecom mergers at the DOJ and FCC.) The DOJ and FCC will seek years of emails and financial records from Sprint and T-Mobile executives and attempt to ascertain the “real” motivation for the merger and its likely consumer effects.

T-Mobile and Sprint will likely lean on evidence that consumers view (or soon will view) mobile broadband and TV as a substitute for landline broadband and TV. Their story seems to be that, much as phone and TV went from “local markets with one or two competitors” years ago to a “national market with several competitors,” broadband is following a similar trajectory, and that viewing this as a 4 to 3 merger misreads industry trends.

There’s preliminary evidence that mobile broadband will put competitive pressure on conventional, landline broadband. Census surveys indicate that in 2013, 10% of Internet-using households were mobile Internet only (no landline Internet). By 2015, about 20% of households were mobile-only, and the proportion of Internet users who had landline broadband actually fell from 82% to 75%. But this is still preliminary and I haven’t seen economic evidence yet that mobile is putting pricing pressure on landline TV and broadband.

FCC Review

Antitrust review is only one step, however. The FCC transaction review process is typically longer and harder to predict. The FCC has concurrent authority with the DOJ under Sections 7 and 11 of the Clayton Act to review telecommunications mergers, but it has never used that authority. Instead, the FCC uses its spectrum transfer review authority as a hook to evaluate mergers under the Communications Act’s (vague) “public interest” standard. Unlike antitrust standards, which generally put the burden on regulators to show consumer and competitive harm, the public interest standard as currently interpreted puts the burden on merging companies to show social and competitive benefits.

Hopefully the FCC will hew to a more rigorous antitrust inquiry and reform the open-ended public interest inquiry. As Chris Koopman and I wrote in a law journal article a few years ago, these FCC “public interest” reviews are sometimes excessively long, and advocates use the vague standards to force the FCC into ancillary concerns, like TV programming decisions and “net neutrality” compliance.

Part of the public interest inquiry is a complex “spectrum screen” analysis. Basically, transacting companies can’t have too much “good” spectrum in a single regional market. I doubt the spectrum screen analysis would be dispositive (much of the analysis in the past seemed pretty ad hoc), but I do wonder whether it will be an issue, since it was a major point of contention in the attempted AT&T-T-Mobile merger.

In any case, that’s where I see the core issues, though we’ll learn much more as the merger reviews commence.

On March 19th, I had the chance to debate Franklin Foer at a Patrick Henry College event focused on the question, “Is Big Tech Big Brother?” It was billed as a debate over the role of technology in American society and whether government should be regulating media and technology platforms more generally. [The full event video is here.] Foer is the author of the new book, World Without Mind: The Existential Threat of Big Tech, in which he advocates a fairly expansive regulatory regime for modern information technology platforms. He is open to building on regulatory ideas from the past, including broadcast-esque licensing regimes, “Fairness Doctrine”-like mandates for digital intermediaries, “fiduciary” responsibilities, beefed-up antitrust intervention, and other types of controls. In a review of the book for Reason, and then again during the debate at Patrick Henry College, I offered some reflections on what we can learn from history about how well ideas like those worked out in practice.

My closing statement of the debate, which lasted just a little over three minutes, offers a concise summation of what that history teaches us and why it would be so dangerous to repeat the mistakes of the past by wandering down that disastrous path again. That 3-minute clip is posted below. (The audience was polled before and after the event and asked the same question each time: “Do large tech companies wield too much power in our economy, media and personal lives and if so, should government(s) intervene?” Apparently at the beginning, the poll was roughly Yes – 70% and No – 30%, but after the debate ended it had reversed, with only 30% in favor of intervention and 70% against. Glad to turn around some minds on this one!)



Two weeks ago, as Facebook CEO Mark Zuckerberg was getting grilled by Congress during two days of media-circus hearings, I wrote a counterintuitive essay about how it could end up being Facebook’s greatest moment. How could that be? As I argued in the piece, with an avalanche of new rules looming, “Facebook is potentially poised to score its greatest victory ever as it begins the transition to regulated monopoly status, solidifying its market power, and limiting threats from new rivals.”

With the probable exception of Google, no firm other than Facebook likely has enough lawyers, lobbyists, and money to deal with the layers of red tape and corresponding regulatory compliance headaches that lie ahead. That’s true both here and especially abroad in Europe, which continues to pile on new privacy and “data protection” regulations. While such rules come wrapped in the very best of intentions, there’s just no getting around the fact that regulation has costs. In this case, the unintended consequence of well-intentioned data privacy rules is that the emerging regulatory regime will likely discourage (or potentially even destroy) the chances of getting the new types of innovation and competition that we so desperately need right now.

Others now appear to be coming around to this view. On April 23, both the New York Times and The Wall Street Journal ran feature articles with remarkably similar titles and themes. The New York Times article by Daisuke Wakabayashi and Adam Satariano was titled, “How Looming Privacy Regulations May Strengthen Facebook and Google,” and The Wall Street Journal’s piece, “Google and Facebook Likely to Benefit From Europe’s Privacy Crackdown,” was penned by Sam Schechner and Nick Kostov.

“In Europe and the United States, the conventional wisdom is that regulation is needed to force Silicon Valley’s digital giants to respect people’s online privacy. But new rules may instead serve to strengthen Facebook’s and Google’s hegemony and extend their lead on the internet,” note Wakabayashi and Satariano in the NYT essay. They continue on to note how “past attempts at privacy regulation have done little to mitigate the power of tech firms.” This includes regulations like Europe’s “right to be forgotten” requirement, which has essentially put Google in a privileged position as the “chief arbiter of what information is kept online in Europe.”

The recently enacted Stop Enabling Sex Trafficking Act (SESTA) has many problems, including that it doesn’t achieve its stated purpose of stopping sex trafficking. It contains a retroactivity clause that appears facially unconstitutional, but this provision would likely be severable by courts if used as the sole basis of a legal challenge. Perhaps more concerning are the potential First Amendment violations of the law.

These concerns go far beyond the rights of websites as speakers; they extend to individual users’ content generation. Promoting sex trafficking is already a crime, and prohibiting such promotion is a lawful restraint on speech. Websites, however, have acted broadly and quickly out of concern over their new liability under the law, and as a result lawful speech has also been stifled.

Given the controversial nature of the law, it seems likely that a legal challenge is forthcoming. Here are three ideas about what a First Amendment challenge to the law might look like.
