November 2010

By Ryan Radia and Wayne Crews

Today, the European Commission opened a formal antitrust investigation into Google to probe allegations that the firm rigged its search engine to discriminate against rivals. This intervention in the online search market, however, will distort the market’s evolution, discourage competitors from innovating, and ultimately hurt consumers.

Google isn’t a monopoly now, but the more it tries to become one, the better it will be for us all. When capitalist enterprises strive to earn a bigger market share, rival firms are forced to respond by trying to improve their offerings. Even if Google is delivering biased search results, it is only paving the way for competitors to break into the search market.

The European Commission is wrong to assume that Google possesses monopoly power. Google accounts for just 6 percent of all dollars spent on advertising in Europe. And even loyal Google users regularly find websites through competing search engines like Bing or through social websites like Facebook and Twitter.

Before resorting to tired old competition laws, European policymakers should remember that the Internet economy is hardly understood by anybody, regulators included. We are in terra incognita; no one knows how information markets will evolve. But one thing is for sure: Online search technology cannot evolve properly if it is improperly regulated. Why make risky investments in hopes of revolutionizing Internet markets if marvelous success means regulation and confiscation?

The real threat to consumers is not from successful high-tech firms like Google, but from overreaching government interventions into competitive market processes. As economists have documented in scholarly journals, antitrust intervention is especially problematic in the information age because antitrust doctrine severely underestimates the critical role of innovation in dynamic high-tech markets.

Earlier today I spoke at the Brookings Institution event “The Future of E-rulemaking: Promoting Public Participation and Efficiency,” which was co-sponsored with the Administrative Conference of the United States. I made two points: we have not yet achieved regulatory transparency, and wiki-government does not overcome Hayek’s knowledge problem. What follows are my remarks.

When we talk about e-rulemaking, we often think about a first generation and a second generation of e-rulemaking.

The first generation is focused on making available online all of the information related to regulation and the rulemaking process, as well as making it simple for citizens to participate electronically in traditional rulemaking. In this way we improve the transparency and accountability of the regulatory process.

The second generation moves beyond the basics to leverage the new social technologies of the internet to increase citizen participation and enhance agency expertise. This is the exciting stuff of using Twitter and Facebook and wikis and collaborative commenting systems to achieve a truly democratic, efficient, and responsive rulemaking process. And while I’m very excited by the prospect of this transformation, I feel I have to suggest some caution.

For one thing, I’m not sure we have successfully graduated from the first generation. Less than two years ago we launched [OpenRegs.com](http://openregs.com) because [Regulations.gov](http://regulations.gov) did not offer something as simple as RSS feeds and had a less-than-ideal user interface. Since then, Regulations.gov has improved considerably, but if we look at the recommendations of the [ABA Administrative Law Section’s report on e-rulemaking](http://ceri.law.cornell.edu/erm-comm.php) — in which so many of the folks I see here today participated — or the recommendations of [OMB Watch’s Task Force on e-rulemaking](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1292911), we can see that we’re a long way from being able to say that the first generation is complete.

On the podcast this week, Peter Thiel, co-founder of PayPal, early investor in Facebook, and president of Clarium Capital, discusses the stagnation of technological innovation. Thiel gives reasons why innovation has slowed recently — offering examples of stalled sectors such as space exploration, transportation, energy, and biotechnology — while pointing out that growth in internet-based technologies is a notable exception. He also comments on the political undercurrents of Silicon Valley, government regulation, privacy and Facebook, and his new fellowship program that will pay potential entrepreneurs to “stop out” of school for two years.

Related Links

To keep the conversation around this episode in one place, we’d like to ask you to comment at the web page for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?

Former TLF blogger Tim Lee returns with this guest post. Find him most of the time at the Bottom-Up blog.

Thanks to Jim Harper for inviting me to return to TLF to offer some thoughts on the recent Adam Thierer/Tim Wu smackdown. I’ve recently finished reading The Master Switch, and I didn’t have my friend Adam’s viscerally negative reaction.

To be clear, on the policy questions raised by The Master Switch, Adam and I are largely on the same page. Wu exaggerates the extent to which traditional media has become more “closed” since 1980, he is too pessimistic about the future of the Internet, and the policy agenda he sketches in his final chapter is likely to do more harm than good. I plan to say more about these issues in future writings; for now I’d like to comment on the shape of the discussion that’s taken place so far here at TLF, and to point out what I think Adam is missing about The Master Switch.

Here’s the thing: my copy of the book is 319 pages long. Adam’s critique focuses almost entirely on the final third of the book (pages 205-319), in which Wu tells the history of the last 30 years and makes some tentative policy suggestions. If Wu had published pages 205-319 as a stand-alone monograph, I would have been cheering along with Adam’s response to it.

But what about the first 200-some pages of the book? A reader of Adam’s epic 6-part critique is mostly left in the dark about their contents. And that’s a shame, because in my view those pages are not only the best part of the book but also its most libertarian-friendly.

Those pages tell the history of the American communications industries—telephone, cinema, radio, television, and cable—between 1876 and 1980. Adam discusses this history in only one of his six posts. There, he characterizes Wu as blaming market forces for the monopolization of the telephone industry. That’s not how I read the chapter in question.

I was quoted this morning in Sara Jerome’s story for The Hill on the weekend seizures of domain names of sites the government believes are selling black-market, counterfeit, or copyright-infringing goods.

The seizures take place in the context of an on-going investigation where prosecutors make purchases from the sites and then determine that the goods violate trademarks or copyrights or both.

Several reports, including ones from CNET, The Washington Post, and Techdirt, wonder how the government can seize a domain name without a trial and, indeed, without even giving notice to the registered owners.

The short answer is the federal civil forfeiture law, which has been the subject of increasing criticism unrelated to Internet issues. (See http://law.jrank.org/pages/1231/Forfeiture-Constitutional-challenges.html for a good synopsis of recent challenges, most of which fail.)

Milton Mueller, a professor at Syracuse University’s School of Information Studies, is a familiar figure to anyone who follows Internet governance issues.  He has established himself as a leading Net governance guru thanks to his extensive academic record in this field with books like Ruling the Root: Internet Governance and the Taming of Cyberspace (2002) and his work with The Internet Governance Project and the Global Internet Governance Academic Network.  Mueller’s latest book, Networks and States: The Global Politics of Internet Governance, continues his exploration of the forces shaping Internet policy across the globe.

The de Tocqueville of Cyberspace

What Mueller is doing – better than anyone else, in my opinion – is becoming the early chronicler of the unfolding Internet governance scene.  He meticulously reports on, and then deconstructs, ongoing governance developments along the cyber-frontier.  He is, in effect, a sort of de Tocqueville for cyberspace: an outsider looking in and asking questions about what makes this new world tick.  Fifty years from now, when historians look back on the opening era of Internet governance squabbles, Milton Mueller’s work will be among the first things they consult.

Mueller’s goal in Networks and States is twofold, with both an empirical and a normative element.  First, he aims to extend his exploration of the actors and forces affecting Internet governance debates and then to develop a framework and taxonomy to better map and understand them. He does a wonderful job on that front, even though many Net governance issues (especially those related to the domain name system and ICANN) can be incredibly boring.  Mueller finds a way to make them far more interesting, especially by familiarizing the reader with the personalities and organizations that increasingly dominate these debates and with the issues and principles that drive their actions or activism.

Mueller’s second goal in Networks and States is to breathe new life into the old cyber-libertarian philosophy that was more prevalent during the Net’s founding era but has lost favor today.  I plan to discuss this second goal in more detail here because Mueller has done something quite important in Networks and States: He has issued a call to arms to those who care about classical liberalism, telling us, in effect, to get off our duffs and get serious about the fight for Internet freedom.

Proponents of Net neutrality regulation continue their full-court press to get the Federal Communications Commission (FCC) and its chairman, Julius Genachowski, to unilaterally push through a new industrial policy regime for the Internet. The latest word, according to Politico, is that the agency is pushing back its scheduled December open meeting from Dec. 15 to Dec. 21 to give itself more time to plot its next move.  There’s no word yet on what the agency’s regulatory blueprint will look like, so it’s impossible to critique the plan at this point.  I’ve made the case against Net neutrality regulation here before, however, and I’m sure those same concerns and critiques will apply to whatever the agency ends up adopting.

What’s most concerning about the way this process is currently playing out is just how anti-democratic it is.  I understand the zeal of the pro-regulatory forces on this issue, but there is simply no good excuse for advocating that three unelected officials at an independent regulatory agency rush through a vote to regulate such a massive and important sector of the American economy.

It used to be the case that a broad and non-partisan coalition of academics and organizations supported the non-delegation principle, which, generally speaking, refers to the notion that only democratically elected officials should be in a position to pass laws and make the really important decisions about the future course of our polity and its economy.  Of course, when it comes to the economy, I’d prefer most of those decisions be left to marketplace experimentation.  However, to the extent regulation is deemed necessary and that regulation governs such a massively important portion of the American economy, that determination should definitely be made by elected leaders in Congress and not delegated to bureaucrats who would ram through regulations with three votes and a sketchy plan for reordering that sector.

Kudos on Open Kinect

November 24, 2010

After freak-outs and backpedaling, Microsoft has revised its stance on the so-called “hacks” of the Kinect.  Wired’s Tim Carmody reported on Monday that Microsoft seems to have indicated that it won’t be taking legal action against anyone who has found new and “unsupported” uses for the Kinect.  Shannon Loftis and Alex Kipman—two Microsofties involved in the creation of the Kinect—were featured on NPR’s Science Friday, and when asked whether anyone would “get in trouble” for their Kinect creations, they responded with “No” and “Nope, absolutely not,” respectively.

This is a refreshing change of course from Redmond.  Embracing your most enthusiastic fans and harnessing their creative power for the betterment of your product certainly makes a heck of a lot more sense than prosecuting those folks under the DMCA.

To be fair, Carmody notes that Microsoft had reason to hold off on taking this stance immediately.  Microsoft wanted to verify that the Kinect hardware was being used as-is and that nothing in the Xbox 360 itself had been modified.  This is incredibly important because, as Carmody succinctly notes:

If Kinect’s whole-room camera, robust facial-recognition software, and portal for video and audio chat are seen as insecure, it’s a nightmare.

Too true.  Microsoft’s sensitivity on the topic is easy to understand when this massive security concern is taken into consideration.  However, it seemed evident from the get-go that all of these “hacks” had nothing to do with hijacking the Xbox’s software for the Kinect; they simply involved plugging the hardware into another device entirely—namely, a PC running Windows or Linux.

So, kudos to Microsoft on sorting out their feelings when it comes to the Kinect.  Too bad they had to do so in public.

Netflix Blows It All Up

November 23, 2010

So now you can pay Netflix $7.99 a month and stream all the video you want? Damn cool if you ask me!

What does the Netflix decision mean for consumers? Two words: more choice! This is what functional markets deliver. There was a time when, if you missed an episode of your favorite show, that was it. You might have gotten lucky and caught it on its single rerun, but that was hit or miss. These days, I can watch The Office at 8 p.m. on Thursday nights. Or I can record it on my DVR and watch it later that night. Or I can watch it the next day on my PC by visiting nbc.com. Or I can watch it on-demand from my cable box. Or I can wait a few months and watch it on DVD. Or, soon, I can stream it via Netflix.

I can’t help but wonder if this makes moot all the handwringing about the FCC’s desire to place conditions on the online services a merged Comcast-NBC Universal can offer. Come on, Netflix has blown up the whole cable TV model.

On the podcast this week, Tyler Cowen, professor of economics at George Mason University, general director of the Mercatus Center, and founder of the popular economics blog Marginal Revolution, answers questions from Surprisingly Free listeners and Marginal Revolution readers. Cowen discusses why people will be appalled that we ever questioned intrusive searches by TSA, what should have been done to minimize unemployment and other harm from the financial crisis, how the “famous American formula” for good government is broken, what might force us to sit around opening cans of dog food with our teeth, and which global sites should be connected by Stargate portals to create the most value. He also asks, “Why read books?”, speculates about the value of his blog, addresses price discrimination of chicken McNuggets, talks about a modern day Athens in Asia with good food, suggests that internet comments are a relatively harmless form of stupidity, and opines about the best thing that government does.

Related Links

To keep the conversation around this episode in one place, we’d like to ask you to comment at the web page for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?