Google's announcement this week of plans to expand to dozens more cities got me thinking about the broadband market and some parallels to transportation markets. Taxi cab and broadband companies alike are seeing their business plans undermined by the emergence of nimble Silicon Valley firms: Uber and Google Fiber, respectively.

The incumbent operators in both cases were subject to costly regulatory obligations in the past but in return they were given some protection from competitors. The taxi medallion system and local cable franchise requirements made new entry difficult. Uber and Google have managed to break into the market through popular innovations, the persistence to work with local regulators, and motivated supporters. Now, in both industries, localities are considering forbearing from regulations and welcoming a competitor that poses an economic threat to the existing operators.

Notably, Google Fiber will not be subject to the extensive build-out requirements imposed on cable companies, which typically built their networks under local franchise agreements in the 1970s and 1980s. Google, in contrast, generally does substantial market research to see if there is an adequate uptake rate among households in particular areas. Neighborhoods that show sufficient interest in Google Fiber become Fiberhoods.

Similarly, companies like Uber and Lyft are exempt from many of the regulations governing taxis. Taxi rates are regulated, for instance, and drivers have little discretion in deciding whom to transport. Uber and Lyft, in contrast, are not price-regulated and can allow rates to rise and fall with demand. Further, Uber and Lyft have a two-way rating system: drivers rate passengers and passengers rate drivers via smartphone apps. This innovation lowers costs and improves safety: the rider who throws up in cars after bar-hopping, who verbally or physically abuses drivers (one Chicago cab driver told me he was held up at gunpoint several times per year), or who is constantly late will eventually have a hard time hailing an Uber or Lyft. The rating system naturally forces out expensive riders (and ill-tempered drivers).

Interestingly, support for and opposition to Uber and Google Fiber cut across partisan lines (and across households: my wife, after hearing my argument, is not as sanguine about these upstarts). Because these companies upset long-held expectations, express or implied, strong opposition remains. Nevertheless, states and localities should welcome the rapid expansion of both Uber and Google Fiber.

The taxi registration systems and the cable franchise agreements were major regulatory mistakes. Local regulators should reduce regulations for all similarly situated competitors and resist the temptation to remedy past errors with more distortions. Of course, there is a decades-long debate about when deregulation turns into subsidy, and that conversation applies to Uber and Google Fiber.

That debate is important, but regulators and policymakers should take every chance to roll back the rules of the past, not layer on more mandates in an ill-conceived attempt to "level the playing field." Transportation and broadband markets are changing for the better with more competition, and localities should generally stand aside.

I have been a critic of the Federal Trade Commission's investigation into Google since it was a gleam in its competitors' eyes: skeptical that there was any basis for a case, and concerned about the effect on consumers, innovation, and investment if a case were brought.

While it took the Commission more than a year and a half to come to the same conclusion, ultimately the FTC had no choice but to close a case that was a "square peg, round hole" problem from the start.

Now that the FTC’s investigation has concluded, an examination of the nature of the markets in which Google operates illustrates why this crusade was ill-conceived from the start. In short, the “realities on the ground” strongly challenged the logic and relevance of many of the claims put forth by Google’s critics. Nevertheless, the politics are such that their nonsensical claims continue, in different forums, with competitors continuing to hope that they can wrangle a regulatory solution to their competitive problem.

The case against Google rested on certain assumptions about the functioning of the markets in which Google operates. Because these are tech markets, constantly evolving and complex, most assumptions about the scope of these markets and the competitive effects within them are imperfect at best. But there are some attributes of Google's markets, conveniently left out of the critics' complaints, that, properly understood, painted a picture for the FTC that undermined the basic, essential elements of an antitrust case against the company.

After more than a year of complaining about Google and being met with responses from me (see also here, here, here, here, and here, among others) and many others that these complaints have yet to offer up a rigorous theory of antitrust injury — let alone any evidence — FairSearch yesterday offered up its preferred remedies aimed at addressing, in its own words, “the fundamental conflict of interest driving Google’s incentive and ability to engage in anti-competitive conduct. . . . [by putting an] end [to] Google’s preferencing of its own products ahead of natural search results.”  Nothing in the post addresses the weakness of the organization’s underlying claims, and its proposed remedies would be damaging to consumers.

FairSearch’s first and core “abuse” is “[d]iscriminatory treatment favoring Google’s own vertical products in a manner that may harm competing vertical products.”  To address this it proposes prohibiting Google from preferencing its own content in search results and suggests as additional, “structural remedies” “[r]equiring Google to license data” and “[r]equiring Google to divest its vertical products that have benefited from Google’s abuses.”

Tom Barnett, former Assistant Attorney General for antitrust, counsel to FairSearch member Expedia, and FairSearch's de facto spokesman, should be ashamed to be associated with claims and proposals like these. He knows better than most that harm to competitors is not the issue under US antitrust law. Rather, US antitrust law requires a demonstration that consumers, not just rivals, will be harmed by a challenged practice. He also knows (as economists have known for a long time) that favoring one's own content, i.e., "vertically integrating" to produce both inputs and finished products, is generally procompetitive.

In a [recent post](http://www.forbes.com/sites/timothylee/2012/09/08/the-weird-economics-of-utility-networks/), Tim Lee does a good job of explaining why facilities-based competition in broadband is difficult. He writes,

>As Verizon is discovering with its FiOS project, it’s much harder to turn a profit installing the second local loop; both because fewer than 50 percent of customers are likely to take the service, and because competition pushes down margins. And it’s almost impossible to turn a profit providing a third local loop, because fewer than a third of customers are likely to sign up, and even more competition means even thinner margins.

Tim thus concludes that

>the kind of “facilities-based” competition we’re seeing in Kansas City, in which companies build redundant networks that will sit idle most of the time, is extremely wasteful. In a market where every household has n broadband options (each with its own fiber network), only 1/n local loops will be in use at any given time. The larger n is, the more resources are wasted on redundant infrastructure.

I don’t understand that conclusion. You would imagine that redundant infrastructure would be built only if it is profitable to its builder. Tim is right we probably should not expect more than a few competitors, but I don’t see how more than one pipe is necessarily wasteful. If laying down a second set of pipes is profitable, shouldn’t we welcome the competition? The question is whether that second pipe is profitable without government subsidy.

That brings me to a larger point: I think what Tim is missing is what makes Google Fiber so unique. Tim assumes that all competitors in broadband will make their profits from the subscription fees they collect from subscribers. As we all know, that's not [how Google tends to operate](http://elidourado.com/blog/theory-of-google/). Google's primary business model is advertising, and that's likely where [they expect their return to come from](http://community.nasdaq.com/News/2012-08/google-seeking-more-ad-impressions-with-fast-fiber.aspx?storyid=162788). One of Google Fiber's price points is [free](http://www.techdirt.com/blog/innovation/articles/20120726/11200919842/google-fiber-is-official-free-broadband-up-to-5-mbps-pay-symmetrical-1-gbps.shtml), so we might expect greater adoption of the service. That's disruptive innovation that could sustainably increase competition and bring down prices for consumers, without a government subsidy.

Kansas City sadly gave Google all sorts of subsidies, like free power and rackspace for its servers as [Tim has pointed out](http://arstechnica.com/tech-policy/2012/09/how-kansas-city-taxpayers-support-google-fiber/), but it also cut serious red tape. For example, there is no build-out requirement for Google Fiber, a fact [now bemoaned](http://www.wired.com/business/2012/09/google-fiber-digital-divide/) by digital divide activists. Such requirements, I would argue, are the [true cause](http://news.cnet.com/How-to-squelch-growth-of-the-high-speed-Net/2010-1034_3-6106690.html) of the unused and wasteful overbuilding that Tim laments.

So what matters more, the in-kind subsidies or the freedom to build only where it's profitable? I think that's the empirical question we're really arguing about. It's not a foregone conclusion of broadband economics that [there can be only one](http://www.youtube.com/watch?v=4AoOa-Fz2kw). And do we want to limit competition in part of a municipality in order to achieve equity for the whole? That's another question over which "original recipe" and bleeding-heart libertarians may have a difference of opinion.


Executive Summary

For three years now the Obama Administration and the Federal Communications Commission (FCC) have been pushing for national broadband connectivity as a way to strengthen our economy, spur innovation, and create new jobs across the country. They know that America requires more private investment to achieve their vision. But, despite their good intentions, their policies haven’t encouraged substantial private investment in communications infrastructure. That’s why the launch of Google Fiber is so critical to policymakers who are seeking to promote investment in next generation networks.

The Google Fiber deployment offers policymakers a rare opportunity to examine policies that successfully spurred new investment in America’s broadband infrastructure. Google’s intent was to “learn how to bring faster and better broadband access to more people.” Over the two years it planned, developed, and built its ultra high-speed fiber network, Google learned a number of valuable lessons for broadband deployment – lessons that policymakers can apply across America to meet our national broadband goals.

To my surprise, however, the policy response to the Google Fiber launch has been tepid. After reviewing Google's deployment plans, I expected to hear the usual chorus of Rage Against the ISP from Public Knowledge, Free Press, and others from the left-of-center, so-called "public interest" community (PIC) who seek regulation of the Internet as a public utility. Instead, they responded to the launch with deafening silence.

Maybe they were stunned into silence. Google's deployment is a real-world rejection of the public interest community's regulatory agenda more powerful than any hypothetical. Google is building fiber in Kansas City because its officials were willing to waive regulatory barriers to entry that have discouraged broadband deployments in other cities. Google's first lesson for building affordable, one Gbps fiber networks with private capital is crystal clear: If government wants private companies to build ultra high-speed networks, it should start by waiving regulations, fees, and bureaucracy.

On Forbes today, I have a long article on the progress being made to build gigabit Internet testbeds in the U.S., particularly by Gig.U.

Gig.U is a consortium of research universities and their surrounding communities created a year ago by Blair Levin, an Aspen Institute Fellow and, previously, the principal architect of the FCC's National Broadband Plan. Its goal is to work with private companies to build ultra high-speed broadband networks with sustainable business models.

Gig.U, Google Fiber's Kansas City project, and the White House's recently announced US Ignite project spring from similar origins and have similar goals. The general belief behind them is that by building ultra high-speed broadband in selected communities, consumers, developers, network operators, and investors will get a clear sense of the true value of Internet speeds that are 100 times as fast as those available today through high-speed cable-based networks, and will then go build a lot more of them.

Google Fiber, for example, announced last week that it would be offering fully symmetrical 1 Gbps connections in Kansas City, perhaps as soon as next year. (By comparison, my home broadband service from Xfinity is 10 Mbps download and considerably slower going up.)

US Ignite is encouraging public-private partnerships to build demonstration applications that could take advantage of next generation networks and near-universal adoption.  It is also looking at the most obvious regulatory impediments at the federal level that make fiber deployments unnecessarily complicated, painfully slow, and unduly expensive.

I think these projects are encouraging signs of native entrepreneurship focused on solving a worrisome problem:  the U.S. is nearing a dangerous stalemate in its communications infrastructure.  We have the technology and scale necessary to replace much of our legacy wireline phone networks with native IP broadband.  Right now, ultra high-speed broadband is technically possible by running fiber to the home.  Indeed, Verizon’s FiOS network currently delivers 300 Mbps broadband and is available to some 15 million homes.


Six months may not seem a great deal of time in the general business world, but in the Internet space it's a lifetime, as new websites, tools, and features are introduced every day that change where and how users get and share information. The rise of Facebook is a great example: the social networking platform that didn't exist in early 2004 filed paperwork last month to launch what is expected to be one of the largest IPOs in history. To put it in perspective, Ford Motor went public more than half a century after it was founded.

This incredible pace of innovation is seen throughout the Internet, and since Google’s public disclosure of its Federal Trade Commission antitrust investigation just this past June, there have been many dynamic changes to the landscape of the Internet Search market. And as the needs and expectations of consumers continue to evolve, Internet search must adapt – and quickly – to shifting demand.

One noteworthy development was Apple's release of Siri, introduced to the world in late 2011 on the most recent iPhone. Today, many consider it the best voice recognition application in history, but its real potential lies in its ability to revolutionize the way we search the Internet, answer questions, and consume information. As Eric Jackson of Forbes noted, in the future it may even be a "Google killer."

Of this we can be certain: Siri is the latest (though certainly not the last) game changer in Internet search, and it has certainly begun to change people's expectations about both the process and the results of search. The search box, once needed to connect us with information on the web, is dead or dying. In its place is an application that feels intuitive and personal. Siri has become a near-indispensable entry point, and search engines are merely the back end. And while still a new feature, Siri's expansion is inevitable. In fact, it is rumored that Apple is diligently working on Siri-enabled televisions, an entirely new market for the company.

The past six months have also brought the convergence of social media and search engines, as first Bing and more recently Google have incorporated information from a social network into their search results. Again we see technology adapting and responding to the once-unimagined way individuals find, analyze and accept information. Instead of relying on traditional, mechanical search results and the opinions of strangers, this new convergence allows users to find data and receive input directly from people in their social world, offering results curated by friends and associates.

Given the importance of privacy self-help—that is, setting your browser to control what it reveals about you when you surf the Web—I was concerned to hear that Google, among others, had circumvented third-party cookie blocking that is a default setting of Apple’s Safari browser. Jonathan Mayer of Stanford’s Center for Internet and Society published a thorough and highly technical explanation of the problem on Thursday.

The story starts with a flaw in Safari’s cookie blocking. Mayer notes Safari’s treatment of third-party cookies:

**Reading Cookies.** Safari allows third-party domains to read cookies.

**Modifying Cookies.** If an HTTP request to a third-party domain includes a cookie, Safari allows the response to write cookies.

**Form Submission.** If an HTTP request to a third-party domain is caused by the submission of an HTML form, Safari allows the response to write cookies. This component of the policy was removed from WebKit, the open source browser engine behind Safari, seven months ago by Google engineers. Their rationale is not public; the bug is marked as a security problem. The change has not yet landed in Safari.

Mayer says Google was exploiting this yet-to-be-closed loophole to install third-party cookies, the domain of which Safari would then allow to write cookies. After describing "(relatively) straightforward" cookie syncing, Mayer says:

But we noticed a special response at the last step for Safari browsers. … Instead of responding with the “_drt_” cookie, the server sends back a page that includes a form and JavaScript to submit the form (using POST) to its own URL.

Third-party cookie blocking evaded, and users’ preferences frustrated.
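To make the mechanics concrete, here is a minimal model of the write policy Mayer describes. This is entirely my own simplification, not Safari's or Google's actual code: a third-party response may set cookies only if the request already carried a cookie or was triggered by an HTML form submission, and the latter is exactly the exception an invisible, auto-submitted POST form exploits.

```python
# Toy model (my own simplification, not Safari's real implementation) of
# Safari's third-party cookie policy as described by Mayer: third parties
# may always READ cookies, but may WRITE only if the request already
# included a cookie or was caused by an HTML form submission.

def may_write_cookie(request_has_cookie: bool, is_form_submission: bool) -> bool:
    return request_has_cookie or is_form_submission

# An ordinary third-party ad request is blocked from writing...
assert not may_write_cookie(request_has_cookie=False, is_form_submission=False)

# ...but an invisible, auto-submitted form slips through the exception...
assert may_write_cookie(request_has_cookie=False, is_form_submission=True)

# ...and once one cookie has landed, every later request carries it,
# so all subsequent writes are allowed too.
assert may_write_cookie(request_has_cookie=True, is_form_submission=False)
```

This also shows why a single hidden form was enough: the first cookie set through the form-submission exception opens the "request already has a cookie" branch for everything that follows.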

Ars Technica has published Google’s response, which doesn’t seem to have gone up on any of its blogs, in full. Google says they created this functionality to deliver better services to their users, but doing so inadvertently allowed Google advertising cookies to be set on the browser.

I don't know that I'm technically sophisticated enough to render a firm judgment, but it looks to me like Google was faced with an interesting dilemma: it had visitors who were signed in to its service and who had opted to see personalized ads and other content, such as "+1"s, but those same visitors had set their browsers contrary to those desires. Google chose the route better for Google, defeating the browser-set preferences. That, I think, was a mistake.

I wonder if there isn’t some Occam’s Razor that a Google engineer might have applied at some point in this process, thinking, “Golly, we are really going to great lengths to get around a browser setting. Are we sure we should be doing this?” Maybe it would have been more straightforward to highlight to Safari users that their settings were reducing their enjoyment of Google’s services and ads, and to invite those users to change their settings. This, and urging Apple to fix the browser, would have been more consistent with the company’s credo of non-evil.

Now, to the ideological stuff, of which I can think of two items:

1) There is a battle for control of the earth out there; well, a battle over whether third-party cookie blocking is good or bad. Have at it, advocates. I think the consuming public, that is, the market, should decide.

2) There is a battle to make a federal case out of every privacy transgression. An advocacy group called Consumer Watchdog (which has been prone to privacy buffoonery in the past) hustled out a complaint to the Federal Trade Commission. I think the injured parties should be compensated in full for their loss and suffering, of which there wasn't any. De minimis non curat lex (the law does not concern itself with trifles), so this is actually just a learning opportunity for Google, for browser authors, and for the public.

Kudos and thanks are due to Jonathan Mayer and Ashkan Soltani for exposing this issue.

[Cross-posted at Reason.org]

This week Google announced that it is grouping 60 of its Web services, such as Gmail, the Google+ social network, YouTube and Google Calendar, under a single privacy policy that would allow the company to share user data between any of those services. These changes will be effective March 1.

Although we have yet to see it play out in practice, this likely means that if you use Google services, the videos you play on YouTube may automatically be posted to your Google+ page. If you’ve logged an appointment in your Google calendar, Google may correlate the appointment time with your current location and local traffic conditions and send you an email advising you that you risk being late.

At the same time, if you've called in sick with the intention of going fishing, that visit to the nearby state park might show up on your Google+ page, too.

The policy, however, will not include Google’s search engine, Google’s Chrome web browser, Google Wallet or Google Books.

The decision quickly touched off discussion as to whether Google was pushing data collection and manipulation too far. The Federal Trade Commission is already on the company's back over data sharing and web tracking. And although the new policy is not that far from how Facebook, Hotmail, and Foursquare operate, just more streamlined, some say that with this latest decision Google is all but flouting user and regulatory concerns.


Over at TIME.com, [I write that](http://techland.time.com/2012/01/17/why-googles-biggest-problem-isnt-antitrust-with-search-plus-your-world/) while some claim that Google Search Plus Your World violates antitrust laws, it likely doesn’t. But I note that Google does have a big problem on its hands: market reaction.

>So if antitrust is not Google’s main concern, what is? It’s that user reaction to SPYW and other recent moves may invite the very switching and competitive entry that would have to be impossible for monopoly to hold. … Users, however, may not wait for the company to get it right. They can and will switch. And sensing a weakness, new competitors may well enter the search space. The market, therefore, will discipline Google faster than any antitrust action could.

Read [the whole thing here](http://techland.time.com/2012/01/17/why-googles-biggest-problem-isnt-antitrust-with-search-plus-your-world/).