Over at Discourse magazine this week, my R Street colleague Jonathan Cannon and I have posted a new essay on how it has been “Quite a Fall for Digital Tech.” We mean that both in the sense that the last few months have witnessed serious market turmoil for some of America’s leading tech companies and in the sense that the political situation for digital tech more generally has become perilous. Plenty of people on the Left and the Right now want a pound of flesh from the info-tech sector, and the first cut involves Section 230, the 1996 law that shields digital platforms from liability for content posted by third parties.

With the Supreme Court recently announcing it will hear Gonzalez v. Google, a case that could significantly narrow the scope of Section 230, the stakes have grown higher. Federal and state lawmakers were already looking to chip away at Sec. 230’s protections through an endless variety of regulatory measures. But if the Court guts Sec. 230 in Gonzalez, it will really be open season on tech companies, as lawsuits will fly whenever someone dislikes a particular content moderation decision. Cannon and I note in our new essay that, Continue reading →

In cleaning up my desk this weekend, I chanced upon an old notebook and, as I have many times before, began to transcribe the notes. It was short, so I got to the end within a couple of minutes. The last page was scribbled with the German term Öffentlichkeit (public sphere), a couple of sentences on Hannah Arendt, and a paragraph about Norberto Bobbio’s view of public and private.

Then I remembered. Yep. This is the missing notebook from a class on democracy in the digital age.   

Serendipitously, a couple of hours later, William Freeland alerted me to Franklin Foer’s newest piece in The Atlantic titled “The Death of the Public Square.” Foer is the author of “World Without Mind: The Existential Threat of Big Tech,” and if you want a good take on that book, check out Adam Thierer’s review in Reason.

Much like the book, this Atlantic piece wades into techno ruin porn but focuses instead on the public sphere: Continue reading →


Two weeks ago, as Facebook CEO Mark Zuckerberg was getting grilled by Congress during a two-day media circus set of hearings, I wrote a counterintuitive essay about how it could end up being Facebook’s greatest moment. How could that be? As I argued in the piece, with an avalanche of new rules looming, “Facebook is potentially poised to score its greatest victory ever as it begins the transition to regulated monopoly status, solidifying its market power, and limiting threats from new rivals.”

Other than perhaps Google, no firm besides Facebook likely has enough lawyers, lobbyists, and money to deal with the layers of red tape and corresponding regulatory compliance headaches that lie ahead. That’s true both here and especially abroad in Europe, which continues to pile on new privacy and “data protection” regulations. While such rules come wrapped in the very best of intentions, there’s just no getting around the fact that regulation has costs. In this case, the unintended consequence of well-intentioned data privacy rules is that the emerging regulatory regime will likely discourage (or potentially even destroy) the new types of innovation and competition that we so desperately need right now.

Others now appear to be coming around to this view. On April 23, both the New York Times and The Wall Street Journal ran feature articles with remarkably similar titles and themes. The New York Times article by Daisuke Wakabayashi and Adam Satariano was titled, “How Looming Privacy Regulations May Strengthen Facebook and Google,” and The Wall Street Journal’s piece, “Google and Facebook Likely to Benefit From Europe’s Privacy Crackdown,” was penned by Sam Schechner and Nick Kostov.

“In Europe and the United States, the conventional wisdom is that regulation is needed to force Silicon Valley’s digital giants to respect people’s online privacy. But new rules may instead serve to strengthen Facebook’s and Google’s hegemony and extend their lead on the internet,” note Wakabayashi and Satariano in the NYT essay. They continue on to note how “past attempts at privacy regulation have done little to mitigate the power of tech firms.” This includes regulations like Europe’s “right to be forgotten” requirement, which has essentially put Google in a privileged position as the “chief arbiter of what information is kept online in Europe.”
Continue reading →

Google’s announcement this week of plans to expand to dozens more cities got me thinking about the broadband market and some parallels to transportation markets. Taxi cab and broadband companies are seeing their business plans undermined by the emergence of nimble Silicon Valley firms–Uber and Google Fiber, respectively.

The incumbent operators in both cases were subject to costly regulatory obligations in the past but in return they were given some protection from competitors. The taxi medallion system and local cable franchise requirements made new entry difficult. Uber and Google have managed to break into the market through popular innovations, the persistence to work with local regulators, and motivated supporters. Now, in both industries, localities are considering forbearing from regulations and welcoming a competitor that poses an economic threat to the existing operators.

Notably, Google Fiber will not be subject to the extensive build-out requirements imposed on cable companies, which typically built their networks according to local franchise agreements in the 1970s and 1980s. Google, in contrast, generally does substantial market research to see whether there is an adequate uptake rate among households in particular areas. Neighborhoods that show sufficient interest in Google Fiber become Fiberhoods.

Similarly, companies like Uber and Lyft are exempted from many of the regulations governing taxis. Taxi rates are regulated and drivers have little discretion in deciding who to transport, for instance. Uber and Lyft drivers, in contrast, are not price-regulated and can allow rates to rise and fall with demand. Further, Uber and Lyft have a two-way rating system: drivers rate passengers and passengers rate drivers via smartphone apps. This innovation lowers costs and improves safety: the rider who throws up in cars after bar-hopping, who verbally or physically abuses drivers (one Chicago cab driver told me he was held up at gunpoint several times per year), or who is constantly late will eventually have a hard time hailing an Uber or Lyft. The ratings system naturally forces out expensive riders (and ill-tempered drivers).

Interestingly, support for and opposition to Uber and Google Fiber cut across partisan lines (and across households–my wife, after hearing my argument, is not as sanguine about these upstarts). Because these companies upset long-held expectations, express or implied, strong opposition remains. Nevertheless, states and localities should welcome the rapid expansion of both Uber and Google Fiber.

The taxi registration systems and the cable franchise agreements were major regulatory mistakes. Local regulators should reduce regulations for all similarly-situated competitors and resist the temptation to remedy past errors with more distortions. Of course, there is a decades-long debate about when deregulation turns into subsidies, and this conversation applies to Uber and Google Fiber.

That debate is important, but regulators and policymakers should take every chance to roll back the rules of the past–not layer on more mandates in an ill-conceived attempt to “level the playing field.” Transportation and broadband markets are changing for the better with more competition and localities should generally stand aside.

I have been a critic of the Federal Trade Commission’s investigation into Google since it was a gleam in its competitors’ eyes—skeptical that there was any basis for a case, and concerned about the effect on consumers, innovation and investment if a case were brought.

While it took the Commission more than a year and a half to finally come to the same conclusion, ultimately the FTC had no choice but to close the case that was a “square peg, round hole” problem from the start.

Now that the FTC’s investigation has concluded, an examination of the nature of the markets in which Google operates illustrates why this crusade was ill-conceived from the start. In short, the “realities on the ground” strongly challenged the logic and relevance of many of the claims put forth by Google’s critics. Nevertheless, the politics are such that their nonsensical claims continue, in different forums, with competitors continuing to hope that they can wrangle a regulatory solution to their competitive problem.

The case against Google rested on certain assumptions about the functioning of the markets in which Google operates. Because these are tech markets, constantly evolving and complex, most assumptions about the scope of these markets and competitive effects within them are imperfect at best. But there are some attributes of Google’s markets—conveniently left out of the critics’ complaints—that, properly understood, painted a picture for the FTC that undermined the basic, essential elements of an antitrust case against the company. Continue reading →

After more than a year of complaining about Google and being met with responses from me (see also here, here, here, here, and here, among others) and many others that these complaints have yet to offer up a rigorous theory of antitrust injury — let alone any evidence — FairSearch yesterday offered up its preferred remedies aimed at addressing, in its own words, “the fundamental conflict of interest driving Google’s incentive and ability to engage in anti-competitive conduct. . . . [by putting an] end [to] Google’s preferencing of its own products ahead of natural search results.”  Nothing in the post addresses the weakness of the organization’s underlying claims, and its proposed remedies would be damaging to consumers.

FairSearch’s first and core “abuse” is “[d]iscriminatory treatment favoring Google’s own vertical products in a manner that may harm competing vertical products.”  To address this it proposes prohibiting Google from preferencing its own content in search results and suggests as additional, “structural remedies” “[r]equiring Google to license data” and “[r]equiring Google to divest its vertical products that have benefited from Google’s abuses.”

Tom Barnett, former AAG for antitrust, counsel to FairSearch member Expedia, and FairSearch’s de facto spokesman, should be ashamed to be associated with claims and proposals like these.  He knows better than many others that harm to competitors is not the issue under US antitrust laws.  Rather, US antitrust law requires a demonstration that consumers — not just rivals — will be harmed by a challenged practice.  He also knows (as economists have known for a long time) that favoring one’s own content — i.e., “vertically integrating” to produce both inputs and finished products — is generally procompetitive. Continue reading →

In a [recent post](http://www.forbes.com/sites/timothylee/2012/09/08/the-weird-economics-of-utility-networks/), Tim Lee does a good job of explaining why facilities-based competition in broadband is difficult. He writes,

>As Verizon is discovering with its FiOS project, it’s much harder to turn a profit installing the second local loop; both because fewer than 50 percent of customers are likely to take the service, and because competition pushes down margins. And it’s almost impossible to turn a profit providing a third local loop, because fewer than a third of customers are likely to sign up, and even more competition means even thinner margins.

Tim thus concludes that

>the kind of “facilities-based” competition we’re seeing in Kansas City, in which companies build redundant networks that will sit idle most of the time, is extremely wasteful. In a market where every household has n broadband options (each with its own fiber network), only 1/n local loops will be in use at any given time. The larger n is, the more resources are wasted on redundant infrastructure.

I don’t understand that conclusion. You would imagine that redundant infrastructure would be built only if it is profitable to its builder. Tim is right that we probably should not expect more than a few competitors, but I don’t see how more than one pipe is necessarily wasteful. If laying down a second set of pipes is profitable, shouldn’t we welcome the competition? The question is whether that second pipe is profitable without government subsidy.
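To make Tim's 1/n point concrete, here is a toy calculation of his utilization claim. The function and figures are my own illustration, not anyone's actual cost data:

```python
# Toy model of overlapping "local loops" (illustrative numbers only).
# If n competitors each wire every household, but each household
# subscribes to exactly one network at a time, only 1/n of the
# installed loops are in use at any given moment.

def loop_utilization(n_networks: int) -> float:
    """Share of installed loops in use when each household picks one of n networks."""
    return 1 / n_networks

for n in (1, 2, 3):
    print(f"n = {n}: {loop_utilization(n):.0%} of loops in use")
```

Utilization falls as n grows, which is Tim's waste point; my reply is that utilization alone doesn't settle the question, since an idle loop that was profitable to build cost its builder, not the taxpayer.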

That brings me to a larger point: I think what Tim is missing is what makes Google Fiber so unique. Tim is assuming that all competitors in broadband will make their profits from the subscription fees they collect from subscribers. As we all know, that’s not [how Google tends to operate](http://elidourado.com/blog/theory-of-google/). Google’s primary business model is advertising, and that’s likely [where it expects its return to come from](http://community.nasdaq.com/News/2012-08/google-seeking-more-ad-impressions-with-fast-fiber.aspx?storyid=162788). One of Google Fiber’s price points is [free](http://www.techdirt.com/blog/innovation/articles/20120726/11200919842/google-fiber-is-official-free-broadband-up-to-5-mbps-pay-symmetrical-1-gbps.shtml), so we might expect greater adoption of the service. That’s disruptive innovation that could sustainably increase competition and bring down prices for consumers–without a government subsidy.

Kansas City sadly gave Google all sorts of subsidies, like free power and rackspace for its servers as [Tim has pointed out](http://arstechnica.com/tech-policy/2012/09/how-kansas-city-taxpayers-support-google-fiber/), but it also cut serious red tape. For example, there is no build-out requirement for Google Fiber, a fact [now bemoaned](http://www.wired.com/business/2012/09/google-fiber-digital-divide/) by digital divide activists. Such requirements, I would argue, are the [true cause](http://news.cnet.com/How-to-squelch-growth-of-the-high-speed-Net/2010-1034_3-6106690.html) of the unused and wasteful overbuilding that Tim laments.

So what matters more? The in-kind subsidies or the freedom to build only where it’s profitable? I think that’s the empirical question we’re really arguing about. It’s not a foregone conclusion of broadband economics that [there can be only one](http://www.youtube.com/watch?v=4AoOa-Fz2kw). And do we want to limit competition in part of a municipality in order to achieve equity for the whole? That’s another question over which “original recipe” and bleeding-heart libertarians may have a difference of opinion.


Executive Summary

For three years now the Obama Administration and the Federal Communications Commission (FCC) have been pushing for national broadband connectivity as a way to strengthen our economy, spur innovation, and create new jobs across the country. They know that America requires more private investment to achieve their vision. But, despite their good intentions, their policies haven’t encouraged substantial private investment in communications infrastructure. That’s why the launch of Google Fiber is so critical to policymakers who are seeking to promote investment in next generation networks.

The Google Fiber deployment offers policymakers a rare opportunity to examine policies that successfully spurred new investment in America’s broadband infrastructure. Google’s intent was to “learn how to bring faster and better broadband access to more people.” Over the two years it planned, developed, and built its ultra high-speed fiber network, Google learned a number of valuable lessons for broadband deployment – lessons that policymakers can apply across America to meet our national broadband goals.

To my surprise, however, the policy response to the Google Fiber launch has been tepid. After reviewing Google’s deployment plans, I expected to hear the usual chorus of Rage Against the ISP from Public Knowledge, Free Press, and others from the left-of-center, so-called “public interest” community (PIC) who seek regulation of the Internet as a public utility. Instead, they responded to the launch with deafening silence.

Maybe they were stunned into silence. Google’s deployment is a real-world rejection of the public interest community’s regulatory agenda more powerful than any hypothetical. Google is building fiber in Kansas City because its officials were willing to waive regulatory barriers to entry that have discouraged broadband deployments in other cities. Google’s first lesson for building affordable, one Gbps fiber networks with private capital is crystal clear: If government wants private companies to build ultra high-speed networks, it should start by waiving regulations, fees, and bureaucracy. Continue reading →

On Forbes today, I have a long article on the progress being made to build gigabit Internet testbeds in the U.S., particularly by Gig.U.

Gig.U is a consortium of research universities and their surrounding communities created a year ago by Blair Levin, an Aspen Institute Fellow and formerly the principal architect of the FCC’s National Broadband Plan.  Its goal is to work with private companies to build ultra high-speed broadband networks with sustainable business models.

Gig.U, along with Google Fiber’s Kansas City project and the White House’s recently announced US Ignite project, springs from similar origins and has similar goals.  The general belief is that building ultra high-speed broadband in selected communities will give consumers, developers, network operators and investors a clear sense of the true value of Internet speeds that are 100 times as fast as those available today through high-speed cable-based networks, and that they will then go build a lot more such networks.

Google Fiber, for example, announced last week that it would be offering fully-symmetrical 1 Gbps connections in Kansas City, perhaps as soon as next year.  (By comparison, my home broadband service from Xfinity is 10 Mbps download and considerably slower going up.)
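To put that speed gap in concrete terms, here is a back-of-envelope calculation. The 5 GB file size is my own illustrative figure, and real-world throughput runs below these idealized headline rates:

```python
# Time to move a 5 GB file at a legacy cable tier vs. a gigabit tier.
# Decimal units: 1 GB = 8e9 bits, 1 Mbps = 1e6 bits per second.

def download_seconds(file_gb: float, mbps: float) -> float:
    """Idealized transfer time, ignoring protocol overhead and congestion."""
    return (file_gb * 8e9) / (mbps * 1e6)

print(f"10 Mbps: {download_seconds(5, 10) / 60:.0f} minutes")  # roughly an hour
print(f"1 Gbps:  {download_seconds(5, 1000):.0f} seconds")
```

The hundredfold rate difference translates directly into a hundredfold difference in transfer time, which is what makes gigabit service feel like a different product rather than a faster version of the same one.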

US Ignite is encouraging public-private partnerships to build demonstration applications that could take advantage of next generation networks and near-universal adoption.  It is also looking at the most obvious regulatory impediments at the federal level that make fiber deployments unnecessarily complicated, painfully slow, and unduly expensive.

I think these projects are encouraging signs of native entrepreneurship focused on solving a worrisome problem:  the U.S. is nearing a dangerous stalemate in its communications infrastructure.  We have the technology and scale necessary to replace much of our legacy wireline phone networks with native IP broadband.  Right now, ultra high-speed broadband is technically possible by running fiber to the home.  Indeed, Verizon’s FiOS network currently delivers 300 Mbps broadband and is available to some 15 million homes.

Continue reading →

Six months may not seem a great deal of time in the general business world, but in the Internet space it’s a lifetime as new websites, tools and features are introduced every day that change where and how users get and share information. The rise of Facebook is a great example: the social networking platform that didn’t exist in early 2004 filed paperwork last month to launch what is expected to be one of the largest IPOs in history. To put it in perspective, Ford Motor went public more than fifty years after it was founded.

This incredible pace of innovation is seen throughout the Internet, and since Google’s public disclosure of its Federal Trade Commission antitrust investigation just this past June, there have been many dynamic changes to the landscape of the Internet Search market. And as the needs and expectations of consumers continue to evolve, Internet search must adapt – and quickly – to shifting demand.

One noteworthy development was the release of Siri by Apple, introduced to the world in late 2011 on the most recent iPhone. Today, many consider it the best voice recognition application in history, but its real potential lies in its ability to revolutionize the way we search the Internet, answer questions and consume information. As Eric Jackson of Forbes noted, in the future it may even be a “Google killer.”

Of this we can be certain: Siri is the latest (though certainly not the last) game changer in Internet search, and it has certainly begun to change people’s expectations about both the process and the results of search. The search box, once needed to connect us with information on the web, is dead or dying. In its place is an application that feels intuitive and personal. Siri has become a near-indispensable entry point, and search engines are merely the back end. And though still a new feature, Siri’s expansion is inevitable. In fact, it is rumored that Apple is diligently working on Siri-enabled televisions – an entirely new market for the company.

The past six months have also brought the convergence of social media and search engines, as first Bing and more recently Google have incorporated information from a social network into their search results. Again we see technology adapting and responding to the once-unimagined way individuals find, analyze and accept information. Instead of relying on traditional, mechanical search results and the opinions of strangers, this new convergence allows users to find data and receive input directly from people in their social world, offering results curated by friends and associates. Continue reading →